Have to say, I was thoroughly impressed by what Apple showed today with all this Personal AI stuff. It proves that the real power of consumer AI will be in the hands of the platform owners, where most of your digital life already lives (Apple or Google for messaging, mail, photos, and apps; Microsoft for work and/or life).

The way Siri can now perform actions based on context from emails and messages, like setting calendar events and reservations or answering questions about someone’s flight, is so useful (I can’t tell you how many times my brother didn’t bother to check the flight code I sent him via message before asking me when I’m landing for pickup!).

I always expected this level of personal intelligence to arrive at some point, but I didn’t expect Apple to hit it out of the park so strongly. That’s the benefit of drawing people into their ecosystem.

Never mind all the thought put into the private cloud, integration with ChatGPT, the image generation playground, and Genmoji. I can genuinely see all of this being useful for “the rest of us,” to quote Craig. As someone who’s taken a pessimistic view of Apple’s software innovation over the last several years, I’m amazed.

One caveat: the image generation of real people was super uncanny and made me uncomfortable. I would not be happy to receive one of those cold and impersonal, low-effort images as a birthday wish.

> I always expected this level of personal intelligence to arrive at some point, but I didn’t expect Apple to hit it out of the park so strongly. That’s the benefit of drawing people into their ecosystem.

It's the benefit of how Apple does product ownership. In contrast to Google and Microsoft.

I hadn't considered it, but AI convergence is going to lay bare organizational deficiencies in a way previous revolutions didn't.

Nobody wants a GenAI feature that works in Gmail, a different one that works in Messages, etc. -- they want a platform capability that works anywhere they use text.

I'm not sure either Google or Microsoft are organizationally-capable of delivering that, at this point.

> AI convergence is going to lay bare organizational deficiencies in a way previous revolutions didn't

Your quote really hit me. I trust Apple to respect my privacy when doing AI, but the thought of Microsoft or Google slurping up all my data to do remote-server AI is abhorrent. I can't see how Microsoft or Google can undo the last 10 years to fix this.

> "I trust Apple..."

I'm actually a little gobsmacked anyone on this forum can type those words without physically convulsing.

The even more terrible part is that I'm sure it's common. And so, via network externalities, the rest of us who do NOT trust any of these companies (on the basis that all of them, time and again, have shown themselves to be totally untrustworthy in all possible ways) will get locked into this lunacy. I now can't deal with the government without a smartphone controlled by either Google or Apple; there is no other choice. Because this utter insanity isn't being loudly called out, spat upon, and generally treated with the withering contempt these companies have so richly and roundly earned, this decision is being made for all of society by the most naive among us.

I don't think the GP meant "trust" as in "I think Apple has my best interests at heart."

Rather, I think they meant "trust" as in "Apple is observably predictable and rational in how they work toward their own self-interest, rarely doing things for stupid reasons. They have chosen to center their business on a long-term revenue strategy involving selling high-margin, short-lifetime hardware — a strategy that only continues to work because of the extremely strong brand image they've built up, and which would be ruined instantly if they broke any of the fundamental brand promises they make. These two factors together mean that Apple has every incentive to say only things they mean and will follow through on."

There's also the much simpler kind of "trust" in the sense of "I trust them because they don't put me in situations where I need to trust them. They actively recuse themselves from opportunities to steal my data, designing architectures to not have places where that can happen." (Of course, the ideal version of this kind of trust would be a fully-open-source-hardware-and-software, work-in-the-open, signed-public-supply-chain-ledger kind of company. You don't get that from Apple, nor from any other bigcorp. Apple's software is proprietary... but at least it's in your hand where you can reverse-engineer it! Google's software is off in a cloud somewhere where nobody can audit changes to it.)

For me it's more "I think Apple's business interests more closely align with my wishes as a customer" than with any other megacorp's.

At the heart of it: I feel like I'm Apple's customer in a way that I never feel like Google's customer (in everything Google does, it always seems like their real customers are ad buyers, even when you are ostensibly paying for services). Microsoft is in the middle and all over the map: some divisions treat you like a customer and others don't, depending on the prevailing winds and the phase of the moon.
What leads you to feel this way?

They are anti right-to-repair and they keep a walled garden on their mobile devices. Their vertically integrated model also leads to unusually high prices. This website in particular directly feels the pain of Apple killing apps only to implement the features itself later, the greedy App Store cut, and the refusal to let third parties use hardware features that Apple itself can use. Consumers feel this indirectly (higher prices, less competition).

Also, don't get it twisted, Apple is still collecting all of your data, even if you ask them not to [0].

0 - https://mashable.com/article/apple-data-privacy-collection-l...

There's absolutely several axes in play here. You have very different concerns than I do, and that's valid.

Their vertically integrated model leads to very good customer service. I don't pay extra for Apple Care and I still get treated like an adult if I show up to an Apple Store with some need.

Even when Apple makes a mistake and collects more data than they should, I don't expect that data to influence the ads I see or to be sold to the highest bidder. (As a developer myself, I can be quite lenient about internal app telemetry.) I can also see in their quarterly reports that ad revenue is barely a side hustle for them, and that most of it comes from untargeted campaigns. (Microsoft is a bigger ad company than Apple. Google is an ad company deep in its DNA at this point, with everything else a side hustle.)

There is a beauty to a well maintained walled garden. Royalty invested a lot of money into walled gardens and Apple maybe doesn't treat you exactly like royalty, but there's a lot of similar respect/dignity there in their treatment of customers, even if they want you to trust them not to touch the flowers or dig below the walls too much. They want you to have a good time. They want their garden to be like a Disney World, safer than the real world.

You may not appreciate those sorts of comforts, and that's fine. Plenty of people prefer the beauty and comfort of a walled garden to the free-for-all of a public park (or the squatter's rights of an abandoned amusement park, if you don't mind playing unpaid mechanic more often than not). There are a lot of subjective axes to evaluate all of this on.

> I don't expect that data to influence ads that I see or to be sold to the highest bidder.

You should temper your expectations: https://gizmodo.com/apple-iphone-france-ads-fine-illegal-dat...

> France’s data protection authority, CNIL, fined Apple €8 million ... for illegally harvesting iPhone owners’ data for targeted ads without proper consent.

"Ads are a small percentage of Apple's quarterly revenue" isn't sound reasoning; they make a lot of money across many areas, so even a small percentage is a large absolute sum.

And if ads are leading to services growth (which they are), you should expect them to do more, scummier ad related things over time.

Fundamentally, advertising is an incredibly high-margin business with lots of room for growth (particularly when you own the platform and can handicap your competitors), so over time all tech companies will become ad companies.

Holy shit that is so depressing...
Yeah that's pretty much how I feel as well.
> high-margin short-lifetime hardware

I don't think this applies to their watch or tablet business where the limiting factor on lifetime in the market is security/os updates. Most alternatives in that space have significantly worse support cycles.

This used to be true of their phones as well, but the android market seems to be catching up in ways that tablets/wearables have not (see google's 7 year commitment for pixels).

Not sure if it applies to their general-purpose computers. Certainly there are non-Mac computers that we can throw Linux on and use for 10+ years, and there are examples of Apple laptops getting cut off earlier than I'd like (RIP my beloved 12" MacBook), but there are often some pretty serious trade-offs to machines older than 7 years anyway. Also, I'm not sure whether Apple's strategy on support lifecycles has shifted for products after the Apple Silicon migration. It wouldn't surprise me if the first-gen M1 products get 10 years of security updates.

It's not that I blindly trust Apple; it's more that they're the one FAANG company where I am the actual customer and their incentives align with, and depend on, keeping me happy. Google/MS couldn't care less how I feel, and I am well aware that I am most certainly not their customer.
> it's more that they're the one FAANG company where I am the actual customer and their incentives align/depend on keeping me happy

Do they though? Battery performance that 'lies' to you intentionally, planned obsolescence, locked in ecosystems, overtly undercutting the alternatives, marketing that hypes up rather bland features...I admit I don't see your point.

Apple, if anything, seem about as user hostile as Microsoft is these days.

> Battery performance that 'lies' to you intentionally, planned obsolescence ...

Everything is relative. Apple generally supports their devices with OS updates for longer than most Android phone makers. Their incentives here are well aligned: they get a decent profit from the App Store no matter how long you use their phone.

I think a lot of the reporting on Apple's actions is very click-baity and lacks nuance. Take the case where Apple throttled the CPU performance of phones whose batteries had grown old and degraded. It was reported as planned obsolescence, but it was in fact the exact opposite: by limiting the power consumption of the CPU they avoided unexpected shutdowns caused by the battery voltage sagging too low during bursts of high power draw. A phone that randomly shuts down is borderline useless; a phone that is slower can at least be used for a while longer. Apple didn't have to do this. They would have spent less R&D money, and faced a much lower chance of a PR backlash, if they had simply done nothing. Yet they did something to keep old phones useful for longer.
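To make the trade-off concrete, here's a rough sketch of the idea. This is not Apple's actual algorithm; all the voltages, the resistance values, and the linear degradation model are invented purely for illustration.

```python
CUTOFF_V = 3.0    # hypothetical voltage below which the phone hard-shuts-down
NOMINAL_V = 3.8   # hypothetical nominal cell voltage

def sagged_voltage(health: float, load_w: float) -> float:
    """Crude model: internal resistance grows as the battery degrades."""
    internal_r = 0.1 + 0.4 * (1.0 - health)  # ohms, invented numbers
    current = load_w / NOMINAL_V             # amps drawn at this load
    return NOMINAL_V - current * internal_r  # cell voltage under load

def allowed_peak_watts(health: float, max_w: float = 8.0) -> float:
    """Highest burst load that keeps the cell above the shutdown cutoff."""
    w = max_w
    while w > 0.5 and sagged_voltage(health, w) < CUTOFF_V:
        w -= 0.25  # throttle down until the burst is survivable
    return w
```

With a healthy battery the full peak is allowed; as health drops, the permitted peak drops with it. A slower phone, but one that stays on.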

> locked in ecosystems

That's a fine balance. Creating a good ecosystem is part of what makes Apple so user friendly, and it's a lot harder to create open ecosystems than closed ones, especially when you factor in security and reliability. If Apple had diverted resources to making their ecosystems more open, I think their ecosystem integration would have been significantly worse, which would have cost them the thing most users consider Apple's primary advantage.

Apple is a mixed bag. They were one of the first to go all-in on USB-C, and sometimes they push aggressively for new open standards that improve the user experience. Yet they held on to Lightning for far too long on their phones. But here you get back to the planned-obsolescence factor: there is a HUGE amount of perfectly fine Lightning accessories out there that people and companies use with iPhones. If Apple had killed Lightning too fast, I can guarantee they would have gotten a lot of hate from people who could no longer use their accessories. With laptops that wasn't a big issue; with phone accessories, adapters are significantly less convenient.

Apple is tinker-hostile, but they’re great at getting-things-done for the majority of people. It’s frustrating when you have the knowledge to build custom workflows, but the happy path and the guardrails work great for many.

Microsoft have no consistency and Google wants you to pray at the altar of advertising.

Shouldn't Microsoft be somewhere between Google and Apple? After all, they do rely on you buying their software in a way Google does not.
Who do you think is Netflix's customer if not you?
> I'm actually a little gobsmacked anyone on this forum can type those words without physically convulsing.

Apple tells a pretty compelling lie here. Rather than execute logic on a server whose behavior can change from moment to moment, it executes on a device you "own" with a "knowable" version of its software. And you can absolutely verify that no network traffic occurs during the execution of these features, both for the things announced this week and going back a decade.

The part where Apple also uploads your personal information to their servers at separate intervals, both powering their internal analytics and providing training data, is also known, and for the most part completely lost on people.

Are you claiming Apple uses personal user data (e.g. someone’s photos or texts) as training data for their server-side models? That’s a massive claim, and there are some journalists you should definitely message on Signal if you have proof of it and aren’t just blowing smoke.
Apple's claim (per public statements) is:

- They upload your data to their servers. This is a requirement of iCloud and several non-iCloud systems like Maps.

- Where analytics is concerned, data is anonymized. They give examples of how they do this like by adding noise to the start and end of map routes.

- Where training is concerned, data is limited to purchased data (photos) and opted-in parties (health research).
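As an aside, the route-endpoint example might look something like this sketch. The trim-and-jitter approach and the 100 m noise scale are my assumptions for illustration, not Apple's published method.

```python
import random

def fuzz_route(points, noise_m=100.0):
    """Drop a route's exact endpoints and jitter the new ends.

    `points` is a list of (lat, lon) tuples; `noise_m` is the
    (invented) noise scale in meters.
    """
    deg = noise_m / 111_000  # rough meters-to-degrees conversion
    trimmed = points[1:-1] if len(points) > 4 else list(points)
    jitter = lambda p: (p[0] + random.uniform(-deg, deg),
                        p[1] + random.uniform(-deg, deg))
    # Only the (new) first and last points get noise; the middle of
    # the route, which is less identifying, is kept as-is.
    return [jitter(trimmed[0])] + trimmed[1:-1] + [jitter(trimmed[-1])]
```

The point of a scheme like this is that the server never sees the exact origin or destination, only a route that starts and ends "somewhere nearby."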

My point is that Apple's code executing on device can be verified to execute on device; that concept does not require trust. Where servers are involved, and Apple does admit their use in some cases, you trust them (as much as you trust Google) that their statements are both perfectly true and ageless. Apple transitions seamlessly between two true concepts with wildly different implications.

Apple's marketing and branding are truly impressive when even the Hacker News crowd, who you'd assume are very tech savvy, are eating up all of the propaganda.
“Wow, so many of these people neck deep into tech, privacy, and law disagree with me. It must be because they’re all suckers.”
Despite your snark: you'll never win an argument where trusting a for-profit corporation counts as some sort of win over a transparent, secure system. Yes, Apple might be the lesser of the evils, but is this really where we, as a privileged class of people who actually understand all this, give up and give in? This is sad.
You’ll also never win an argument when assuming that the people you disagree with haven’t thought this through and come up with a different conclusion. The snark is from the implication that we’re either clueless or blind to it. The more likely explanation is that we have different priorities, and that we’re viewing the question from a different angle. That doesn’t make us ignorant or unprincipled, any more than you disagreeing with me makes you naive or unserious.

We have different ideas. That’s all. There’s no need to look down on each other for it.

There’s not really any transparent secure system that competes with Apple.
Yeah, at this point, for me, it’s “use Apple stuff” or “barely use computers in my personal life”. I did the Linux and (later) Android tinkering thing for a good long while, and I’m over it. Losing all the features and automation and integration I get with no time lost, for a bunch of time consuming and janky DIY that still wouldn’t get me all of it, isn’t something I’ll do these days. I’d just avoid computers.
I’ve been diagnosed with ADHD. I lost so much time to screwing around with a million config options, recompiling with slightly different flags to make things 2% faster, keeping my system bleeding-edge up to date. All that nonsense was fun, but it was a way to avoid getting started on what I was supposed to be doing.

Going back to a Linux desktop would be the end of me. I know it.

Same. Aside from a few self hosted services, I use Apple for my phone and work machines because I just want it to fucking work all the time. My parents understand that if they want IT support from me that they must use a Mac or iPhone - because then I rarely have to help them with anything.

The one exception I made recently was to dump Windows and move to a Fedora immutable build after seeing how capable the Steam Deck (and Linux) was for all the games I play. I’ll get shot of that if it causes me grief though - or just stop playing games on my PC.

I just don’t have the energy to mess about with it all these days and Apple is the 2nd best option in lieu of that.

Tried that route, and while it is quite viable in 2024, I reverted to dual booting. I can turn on my computer to work or to play games, and those two environments are completely separate. Nvidia is a terrible corporation, but similar to Apple in the #justworks category. Things have gotten much better on Linux recently, but you lose features like the new HDR support, and occasionally have to worry about anticheat.
Yeah, I’m lucky in that I’m pretty stable now in which games I’m playing and they all run flawlessly via Proton or Lutris. It seems to mostly be the controversial kernel-level anticheats which don’t play ball or where devs don’t enable the Linux support flags for things like EAC which cause a problem. For those, I kinda just have to suck it up and play something else.

I did have a ton of issues with NVidia in the same environment, but after putting a Radeon card in it has been smooth sailing. That’s to be expected I guess.

I doubt I would have tolerated this even a few years ago though and would have ended up like yourself with a dual boot setup.

Define "competes". Sent from my GNU/Linux phone Librem 5.
Uh, friend, this is still just an internet technology-enthusiast forum. Popular opinion here is about as reliable as it is on Reddit. If you are taking HN upvotes as some kind of expert input, you're in for a rough time.
No argument from me. I was replying to someone who couldn't believe the readers here, "who'd you assume are very tech savvy", didn't agree with their opinions.
"Wow, so many of these people disagree with me, it might be because they have a huge dangerous blind spot because of a lack of knowledge and/or experience and/or have trouble seeing things from the outside"

...is a thing I experience on a regular basis (and that I only really gained confidence in once I actually saw the mistakes cause problems, e.g. password managers)

I would give multiple upvotes to this, were it possible.

I do not have either Google or Apple accounts and I do not intend to ever open such accounts (despite owning some Android smartphones and having owned Apple laptops).

Because of this, I am frequently harassed by various companies or agencies, which change their interaction methods to smartphone apps and then deprecate the alternatives.

Moreover, I actually would be willing to install such apps, if only there were some means to download them, but most vendors insist on providing the app only through the official store, from which I cannot install it because I have no Google account.

I have been forced to close my accounts at one of the banks I used, because after I had used their browser-based online banking for more than a decade, including from my smartphone, they decided to move to a custom app.

In the beginning that did not matter, but then they terminated their web server for online banking and refused to distribute their app directly, leaving the Google store as its only source.

I have been too busy to fight this legally, but I cannot believe that their methods do not break any law. I am not a US citizen; I live in the European Union, and when a European bank (a Société Générale subsidiary) refuses to provide its services to anyone who does not enter into a contractual relationship with a foreign US company, such discrimination cannot be legal.

I sympathize with the plight, as I have also occasionally tried to fight this fight.

However, to quibble with your last analysis, you're almost certainly entering an agreement with the EU registered legal entity of a multinational company, and you almost certainly already had to do that to obtain the hardware, run the OS, use the browser, etc. The degree to which any of those contracts are enforceable is another matter.

Even if Google were treated as a local company, that does not change anything.

I find it unbelievable that a bank has the arrogance to condition its services on whether its customers accept to do business with some third party.

I see no difference between the condition of having a Google account and for instance a condition that I should buy my car from Audi or from any other designated company, instead of from wherever I want. It is none of my bank's business what products or services I choose to buy or use (outside of special circumstances like when receiving bank credits).

Could you provide an alternative model where you get what you want, that is economically viable for vendors and manufacturers to invest in, and that does not require me to teach my parents how to sysadmin their phones to keep them safe?

I trust Apple more than I trust Google to not share my data with a large group of corporate entities who want to sell me things I do not wish to buy.

I believe both - and if required, organizations like Mozilla, Ubuntu, Redhat/Oracle, whoever - to comply with law enforcement requests made of them to hand over any data relating to me that they might hold. I'm OK with that. I think Apple has less of that data than Google, and works actively to have less of it. Google works actively to increase the amount of data they have about me.

I think even if you had a functional device using entirely open software, that any organisation you share that data with or use to communicate with using that device - including cloud service providers, network providers, and so on - would also comply with law enforcement.

"Ah!", you say, "But I get to choose which crypto to use! I know it won't have backdoors!". To which I will reply you are unlikely to have read and truly understood the source code to the crypto software you're using, and that such software is regularly shown to have security issues. It's just not true that open source means that all bugs become shallow, and the "many eyes" you're hoping for to surface these issues are likely employed at, err, Apple, Google, Redhat, Ubuntu, Mozilla...

I look at the landscape and I conclude that true open source environments have a ton of issues, Google/Android have far more (for my taste), and that I am more confident in Apple than I am in either myself (even as an experienced tech expert), or Google, or Microsoft, to keep my data private to me to the greatest extent legally permissible.

Do I think "legally permissible" should be extended? Sure. Do I wish a multi-billionaire would throw 50% of the net worth at making open source compete on the same level? Yeah, cool. Do I think any of that is realistic in the next 5 years? No. So, I make my bets accordingly, eyes wide open, balancing the risks...

Do you have any examples of Apple being untrustworthy to back up your rather extreme reaction?
You should remember that in December 2023 it was revealed that the "Apple Silicon" CPUs have some undocumented testing features which, unbelievably, remained enabled in Apple devices for many years, instead of being disabled at the end of production, until the bug finders notified Apple.

Using the undocumented but accessible control registers, all the memory protections of Apple devices could be bypassed. Using this hardware backdoor together with some software bugs in Apple's system libraries and applications, it was possible for many years, until the end of 2023, to remotely take complete control of any iPhone, with access to its storage and control of the camera and microphone, in a way that was almost impossible for the owner to discover. (The backdoor bugs were found only as a consequence of analyzing suspicious Internet traffic from iPhones that were monitored by external firewalls.)

It is hard to explain such a trivial security error as not disabling a testing backdoor after production, for a company that has claimed publicly for so long that they take the security of their customers very seriously and that has provided a lot of security theater features, like a separate undocumented security processor, while failing to observe the most elementary security rules.

It is possible that the backdoor was intentional, either inserted with the knowledge of the management at the request of some TLA, or by a rogue Apple employee who was a mole of such a TLA, but these alternative explanations are even worse for Apple than the explanation based on negligence.

I don't think this demonstrates untrustworthiness.
Forgot the "screeching minority" quote about people who value privacy already?

https://www.howtogeek.com/746588/apple-discusses-screeching-...

> all of them, time and again, have shown themselves to be totally untrustworthy in all possible ways

Sorry, but this seems like a very vague claim to me. Can you specifically point out a time where Apple proved itself untrustworthy in a way that impacts personal privacy?

When Apple says they treat my data in a specific way, then yes I do trust them. This promise is pretty central to my usage of them as a company. I'd change my mind if there was evidence to suggest they're lying, or have betrayed that trust, but I haven't seen any, and your post doesn't provide any either.

It depends on what you'd consider "untrustworthy," but some (myself included) feel it's hypocritical for Apple to position itself as the privacy-conscious choice, and use its marketing/PR machine to give the impression it only makes money on devices and subscriptions, while silently running an ads-funded cash cow with billions of dollars going directly to the bottom line as pure profit.

Here are a few pointers to get you up to speed [1-5]. Of course, there's nothing wrong with monetizing their own user base and selling ads based on their first-party data (or, in the case of Safari, monetizing the search-engine placement). But I find it ironic that they make a ton of money by selling ads based on the exact same signals they demonize others for using: user behavior, context, location, profile.

[1] https://searchads.apple.com/

[2] Apple’s expanding ad ambitions: A closer look at its journey toward a comprehensive ad tech stack - https://digiday.com/media-buying/apples-expanding-ad-ambitio...

[3] Apple’s Ad Network Is The Biggest Beneficiary Of Apple’s New Marketing Rules: Report -- https://www.forbes.com/sites/johnkoetsier/2021/10/19/apples-...

[4] Apple Privacy Suits Claim App Changes Were Guise to Boost Ad Revenue - https://www.hollywoodreporter.com/business/business-news/app...

[5] Apple is becoming an ad company despite privacy claims - https://proton.me/blog/apple-ad-company

> they're silently managing an ads-funded cash cow, with billions of dollars that go directly to the bottom line, as pure profit

Advertising isn't anti-privacy. Apple's fight was with tracking by third parties without user knowledge or consent. That is independent of, but often used for, advertising purposes.

This is different from say Google determining ads on Youtube based on what you are watching on Youtube.com, and from Amazon or Apple promoting products based on your product searches solely within their respective stores.

> Advertising isn't anti-privacy.

Advertising works much better when there is no privacy.

Tracking-based ad targeting is a blip in the history of advertising and goes against decades of prior "common sense" in advertising: that the best ads cast the widest net and catch the eye of people you (and they) don't even know are potential targets.

I hope this current fad dies and people return to that older marketing "common sense". Over-targeting is bad for consumers and bad for advertisers, the only people truly benefiting seem to be Google and Meta.

Your truism doesn’t refute their point.
The fact that Advanced Data Protection on iCloud wasn't forced is sus.
As someone who has to help my father with his personal tech as his mental health deteriorates (several brain tumors), I'm thrilled every time I find something that ISN'T locked down behind pin codes, passwords or other authentication methods that he no longer remembers or can communicate.

His current state really has made me think about my own tech, about what should be locked down and what really should not be - things that we lock down out of habit (or by force) rather than out of necessity.

Given the rate at which the elderly find themselves swindled out of money due to scams, hacks or any other method of invasion, I really don’t think loosening controls makes the most sense.

Might be interesting if companies offered the ability for someone to be a “steward” over another when it came to sensitive choices (like allowing new logins, sending money, etc). Of course that itself is a minefield of issues with family members themselves taking advantage of their elderly members. But maybe power of attorney would have to be granted?

What I hinted at was more granularity in how we treat different types of data, or other accesses, in response to the idea of being forced to turn on "Advanced Data Protection on iCloud".

Rather than putting all of our personal data and accesses under a thick virtual fire blanket, perhaps it is perfectly fine if some of it isn't protected at all, or is protected in ways that could be easily circumvented with just a tiny bit of finagling.

This is how I'm now approaching my own digital footprint: some non-secret things are wide open and unencrypted, and you just need to know where to look to access them.

Relatedly, I think a lot of us under-estimate/under-appreciate physical security in our threat models. A desktop tower that never leaves my house, and that would be a pain for anyone but a dedicated burglar to steal, maybe doesn't need the same security/encryption/authentication requirements for physical access that a phone or laptop does. Certainly there are scenarios where someone targets me specifically and gets physical access to my house, but there are also more legal protections against some of those. Threat models are all about trade-offs, and the trade-offs around physical security and physical access restrictions can be under-appreciated places to make choices in your favor.
I understand what you mean, but I think your example wasn't terrific, given that the elderly are frequent and vulnerable targets for criminals. I've actually had scenarios where my parents were unable to log into an account, and when I asked why they needed to, it was to give some "support specialist" the information they were asking for. Is it a pain in the ass to help your parents install a mobile app sometimes? Yeah, I guess. I'm just glad someone didn't drain their bank account.

There is sometimes a point to inconvenience in that it requires time and assessment.

seec · 2 weeks ago:
Yeah, the thing about "security" is that there's a much higher chance it will come back to bite you down the road than that it will be successful (actually prevent an issue). I have some funny stories about unrecoverable drives because of forgotten encryption keys.

For most people, the only security they need is around access to their money; everything else is mostly irrelevant. Nobody really cares about your weird habits or whatever.

sho · 2 weeks ago:
Not when you understand the tradeoffs being made. If you enable Advanced Data Protection and lose or forget your password, Apple cannot help you recover it. It makes sense that it's opt-in and users make a conscious choice to make that trade-off.
Have you ever done tech support?
Yeah, you’re right. Apple’s approach to privacy is like one of those fairytale genies. On paper, and in many technical aspects, class-leading, but useless because anyone powerful and/or determined enough to hurt you will be able to use the backdoors that they willingly provide.

End to end encryption? Sure, but we’re sending your location and metadata in unencrypted packets.

Don’t want governments to surveil your images? Sure, they can’t see the images - but they’ll send us hashes of illegal images, and we’ll turn your images into hashes, check them against each other, and report you to them if we find enough.

Apple essentially sells unbreakable locked doors while being very careful to keep a few windows open. They are a key PRISM member and have obligations under U.S. law that they will fulfil. Encryption backdoors aren’t needed when the systems that they work within can be designed to provide backdoors.

I fully expect that Apple Intelligence will have similar system defects that won’t be covered properly, and will go forgotten until some dissident gets killed and we wonder why.

For a look at their PR finesse in tricking the media, see this piece on the CSAM fiasco, which was resolved in Apple’s favour.

https://sneak.berlin/20230115/macos-scans-your-local-files-n...

> Sure, they can’t see the images - but they’ll send us hashes of illegal images, and we’ll turn your images into hashes, check them against each other, and report you to them if we find enough.

> I fully expect that Apple Intelligence will have similar system defects

Being able to scan devices for CSAM at scale is a "defect" to you?

Yes, it is a defect, for many reasons:

- it's anti-user: a device spying on you and reporting back to a centralized server is a bad look

- it's a slippery slope: talking about releasing this caused them to get requests from governments to consider including "dissident" information

- it's prone to abuse: within days, the hashing mechanism they were proposing was reverse engineered and false positives were embedded in innocent images

- it assumes guilt across the population: what happened to innocent by default?

And yes, CSAM is a huge problem. Btw, Apple DOES currently scan for it: if you share an album (and thus decrypt it), it is scanned for CSAM.

Yeah, but Google and MS have the same problems. What you're talking about is the reality of using a computer connected to the internet since 2003.
seec · 2 weeks ago:
But they don't bullshit about it as much, their offerings are much cheaper, and it's easier to avoid paying as much (either with data or money).

There is just a general hypocrisy about Apple that is hilarious.

This is true, but the companies in your examples aren't trying to pretend they're the better alternative on that front. Apple is doing its best to paint itself as some golden company when in reality it's no better (and honestly worse in some categories).
jajko · 2 weeks ago:
Don't expect balanced, objective opinions on Apple on HN; that was never the case. Some of it is tech enthusiasts, some may be employees or investors, and some is paid PR.

Nothing wrong there per se, it's just good to realize it.

What government needs you to have a smartphone from Apple or Google?
The Australian Government requires you to have an app called MyGovID to do your business taxes and other administrative tasks. The app is Apple or Android only; there is no web interface.
That’s crazy. Why no web interface? In Poland, for taxes we have a web interface that works on both mobile and desktop, and for many other things it’s a choice between an app, the web, and paper.
The tax part is all web; it’s just that the mandatory two-factor authentication for login requires the app.
Ah. In our case, we have mobile app 2FA, but also SMS 2FA and authentication through bank login. It’s quite neat that the government struck a deal with a bunch of banks, and they serve as identity providers too.
Not a requirement, but in Poland a ton of administrative things can be done from a dedicated iPhone/Android app, including using your official ID. It is optional though, and you can always do the same stuff (ID aside) from the web, or on paper by going places in person.
> Because this utter insanity isn't being loudly called out, spat upon, and generally treated with the withering contempt that these companies have so richly and roundly earned this decision is being made for all society by the most naive among us.

Ah yes, blame the simple-minded plebes who foolishly cast their noses up at Windows Phone. If only Ballmer were still in charge, surely he'd have saved us from this horrible future of personal, privacy-respecting AI at the edge...

Have to agree, Apple seems to put a really strong emphasis, above all else, on "your shit is your shit and we don't want to see it."
m463 · 2 weeks ago:
But this is not true. That's the thing.

Apple is very intrusive. macOS phones home all the time. iOS gives you zero control (all apps have internet access by default, and you cannot stop it).

Apple uses your data. You should be able to say no.

And as for your data, they do other things too, in a different way. Everything goes to iCloud by default. I've gotten new devices and, boom, they're uploading everything to iCloud.

I've seen privacy-minded parents say no, but then they get their kid an iPhone and all of their stuff goes to iCloud.

I think Apple should allow a personal, you-have-all-your-data iCloud.

> Apple is very intrusive. Macos phones home all the time.

The platform is heavily internet-integrated, and I would expect it to periodically hit Apple servers. There are a lot of people claiming to be security researchers reporting what Little Snitch told them. There are drastically fewer who would introspect packets and look for any gathered telemetry.

I really haven't seen evidence Apple is abusing their position here.

> Everything goes to icloud by default. I've gotten new devices and boom, it's uploading everything to iCloud.

You need to enable iCloud. You are prompted.

Also, a new device should have next to nothing to upload to iCloud, as its hard disk is still in the factory configuration.

> I think apple should allow a personal you-have-all-your-data iCloud

They have desktop backup. Maybe they should allow third-party backup apps on iPhone, although I suspect the data would be encrypted and blinded to prevent abuse by third parties, and recovery would be challenging because today recovery is only possible on a known-state filesystem. The recovery aspect is what has really limited it to the handful of approaches implemented directly by Apple.

A key difference is that Apple isn’t then selling the info it has on you to advertisers.

I don’t think any large tech company is morally good, but I trust Apple the most out of the big ones to not do anything nefarious with my info.

None of the tech companies are selling your data to advertisers. They allow advertisers to target people based on the data, but the data itself is never sold. And it would be dumb to sell it because selling targeted ads is a lot more valuable than selling data.

Just about everyone other than the tech companies is actually selling your data to various brokers, from the DMV to the cellphone companies.

> None of the tech companies are selling your data to advertisers.

First-hand account from me that this is not factual at all.

I worked at a major “big 5” media-buying agency in advanced analytics; we were a team of 5-10 data scientists. We got a firehose from “G”, on behalf of our client, a major movie studio, of searches for their titles by zip code.

On top of that we had clean roomed audience data from “F” of viewers of the ads/trailers who also viewed ads on their set top boxes.

I can go on and on, and yeah, we didn’t see “Joe Smith” level of granularity, it was at Zip code levels, but to say FAANG doesn’t sell user data is naive at best.

> we didn’t see “Joe Smith” level of granularity, it was at Zip code levels

So you got aggregated analytics instead of data about individual users.

Meanwhile other companies are selling your name, phone number, address history, people you are affiliated with, detailed location history, etc.

Which one would you say is "selling user data"?

They absolutely are. And they give it to governments upon request.

Their privacy stories are marketing first.

I don't think they sell it like Google or Samsung. For example Apple does not have a location intelligence team dedicated to driving revenue for store brands or targeting users that go there using precise geo location data.

Google and Samsung do.

Give me a source that they are selling your data, not targeted ads.
I _trust_ Google to attempt to do so, and fail sadly along the way…

They went from “Don’t be evil” to a cartoonish “Doctor Evil” character in a decade.

> And they give it to governments upon request.

So in other words, "companies operating within a nation are expected to abide by the laws of that nation"?

Apple structures their systems to limit the data they can turn over by request, and documents what data they do turn over. What else do you believe they should be doing?

Actually under US rule of law you don’t just turn over things upon request.

Much like every other tech company you test the request.

Apple never does.

> Apple never does.

Citation needed?

They are selling data to advertisers? I would like to know more about that.
Google isn't. They are the advertising engine and sell to advertisers for reach, just like Facebook does.

I trust Apple about as far as I can throw them too. They are inherently anti-consumer rights everywhere in their ecosystem. The "Privacy" angle is just PR.

I would say it is PR as much as it is strategic differentiation. Their business model is to sell products and services directly to consumers. This is different from Microsoft, which sells to businesses that need data protection but actually want to be able to monitor their employees, and from Google, which wants to leverage your data and behavior to let advertisers target you effectively with ads.

None of the big companies expressly sell your information. Not because they are altruistic, but because it is an asset that they want to protect so they can rent it to the next person.

Yo Apple is an ad company as well now. They do both.
They all do a little bit of everything. Google sells devices too, but they are not predominantly a physical device company.
At a $3 trillion market cap, they are a capitalistic hellscape and do everything in their power to benefit their own interests, which are not yours.
It’s amazing how many here don’t understand that.
There are several very nuanced comments much farther up this chain who clearly do understand that, and lay out their informed reasoning for why they have chosen to use Apple devices for themselves. It’s amazing how many here seem to have ignored them.
Anyone who disagrees with you about this should buy a Mac and try not enabling iCloud. There are constant nags and, as far as I could find, no way to turn them off.
1) Have you tried installing Linux? ;-)

2) I have booted macOS VMs without iCloud. I'm not sure of the nags though. I believe signing out of iCloud will prevent iCloud from contacting Apple.

https://support.apple.com/en-us/104958

m463 · 2 weeks ago:
1) yes:)

2) That is entirely NOT true. You should install Little Snitch and see what happens even if you NEVER sign into iCloud. Note that the phoning home is not immediate; it happens in the background at random intervals from random applications.

Just some random services blocked by Little Snitch on a Mac:

accountsd, adprivacyd, airportd, AMPLibraryAgent, appstoreagent, apsd, AssetCacheLocatorService.xpc, cloudd, com.apple.geod.xpc, com.apple.Safari.SafeBrowsing.Service, commerce, configd, familycircled, mapspushd, nsurlsessiond, ocspd, rapportd, remindd, Safari, sntp, softwareupdated, Spotlight, studentd, syspolicyd, touristd, transparencyd, trustd, X11.bin

(never signed into an Apple ID)

Tell me more about how dastardly it is that Safari communicates with Apple servers. Type it from your browser that doesn’t communicate directly with its developers.
Judging from a lot of these comments, most of the folks here are reading/commenting via telnet.
I’ve never used iCloud since it came out. I can’t think of a single nag. Where do you see it on your iPhone or Mac?
There are several of them. The most annoying for me was getting intermittent notifications to sign in to iCloud.

There's also this one: https://discussions.apple.com/thread/250727947

I eventually just gave in to stop the nags.

I have an iPad (not iPhone or Mac). If you don't set up iCloud, there's always an annoying bright red circle in Settings that tells you to "finish setting up your iPad".

Doesn't have to be bright red, or even there at all.

Last time I had that on a laptop I was going to wipe soon afterward and didn’t want to fully set up, I clicked the “finish setting up” link and canceled out. Voila, red circle gone.
Gone until a few days or a week later, when it comes back.
Yes, that’s the only one I’ve seen. But it’s not much of a nag.
> all apps have internet access by default, and you cannot stop it

Technically you can by turning off wi-fi and disabling cellular data, bluetooth, location services, etc. for the app.

To your point though, wi-fi data should also be a per-app setting, and it is an annoying omission. macOS has outgoing firewalls, but iOS does not (though you could perhaps fake it with a VPN.)

krrrh · 2 weeks ago:
> Apple is very intrusive

> Apple uses your data.

> they do other things too, a different way

What specifically do you mean? Their frankly quite paranoid security and privacy white papers are pretty comprehensive and I don’t think they could afford to lie in those.

> Apple should allow a personal you-have-all-your-data iCloud

Advanced Data Protection[0] applies e2ee to basically everything, with the exception of email, and doesn’t degrade the seamless multi-device experience at all. For most people this is the best privacy option by a long shot, and no other major platform can provide anything close.

They’ve hampered the product experience for a long time because of their allergy to modelling their customers in the cloud. The advent of AI seems to have caught them a bit off guard, but the integrated ecosystem and focus on on-device processing look like they may pay off, and Siri won’t feel 5 years behind Google Assistant or Alexa.

[0] https://support.apple.com/en-ca/102651

> What specifically do you mean? Their frankly quite paranoid security and privacy white papers are pretty comprehensive and I don’t think they could afford to lie in those.

A couple of years ago Apple was busted when it was discovered that most Apple first-party apps weren't getting picked up by packet sniffers or firewalls on macOS.

Apple tried deflecting for a while before finally offering up the flimsy claim that it "was necessary to make updates easier". Which isn't a really good explanation when you're wondering why TextEdit.app needs a kernel network extension.

What actually happened was Apple removed support for kernel extensions that these firewall apps used.

The user-mode replacement APIs available to sandboxed apps had a whitelist for Apple's apps, so you couldn't install some App Store firewall app that would then disable the App Store and screw everything up.

After the outrage, in a point release a few months later, they silently emptied out the whitelist, resolving the issue.

They never issued any kind of statement.

So their "fix", as described here, removed protection from "having the App Store disabled and everything screwed up"?

That makes no sense.

Even if it did, the app that would need protection is the App Store, not every single Apple app. In many cases, the fix for the worst-case scenario would be "remove the firewall app".

Also, TextEdit, for but one example, was not an App Store app but a base-image app.

> They never issued any kind of statement.

Shocking. I've had at least two MBPs affected by different issues that were later subject to recall, but no statement there. radar.apple.com may well be read by someone, but is largely considered a black hole.

The lack of an iOS setting to deny specific apps network access is absurd. It doesn't feel like much of a privacy-focused platform when every day in my network logs I see hundreds of attempted connections from 'offline' iOS apps.
For what it's worth, those platform investments are the difference between Apple being applauded for this, and Microsoft being pilloried for Recall's deficiencies.
> I trust Apple to respect my privacy when doing AI...

Depends on where you are. Apple will bend over backwards when profits are affected, as you can see in China.

Ironically, the only time a large company took a stand at the cost of profits was in 2010, when Google pulled out of China over hacking and subsequently refused to censor. Google has changed since then, but that was the high-water mark for corporations putting principles over profits. Apple, no.

> Google pulled out of China over hacking and subsequently refused to censor

My impression is that they had little chance of surviving in the Chinese market, competing with a severely limited product against state-sponsored search products while also being a victim of state-sponsored cyberattacks.

It was the morally correct decision, but I don't know if they were leaving any money on the table doing so. I suspect the Google of today would also decide not to shovel cash into an incinerator.

When enrolling physical security keys to my accounts, only Google's process requested extra, identifiable fields in my key, generating a warning in Firefox, which can anonymize these fields.

Google wants to track even my physical security key across sites to track me.

How can I trust their AI systems with my data?

The attestation on (FIDO certified) security keys is a batch attestation, meant to correspond to a batch size of at least 100,000.

So they were effectively asking for the make and model.

There are non-certified authenticators which may have unfortunate behaviors here, such as having attestations containing a hardware serial number. Some browsers maintain a list and will simply block attestations from these authenticators. Some will prompt no matter what.

There is also a bit of an 'Open Web' philosophy at play here - websites often do not have a reason to make security decisions around the make and models of keys. Having an additional prompt in a user conversion path discourages asking for information they don't need, particularly information which could be used to give some users a worse experience and some vendors a strong 'first-mover' market advantage.

In fact, the motivation for asking for this attestation is often self-service account management. If I have two entries for registered credentials, it is nice to have some way to differentiate them, such as knowing one of them is a Yubico 5Ci while the other is an iPhone.

Many parties (including Google) seem to have moved to using an AAGUID lookup table to populate this screen in order to avoid the attestation prompt. It also winds up being more reliable, as software authenticators typically do not provide attestations today.
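To make the AAGUID mechanics concrete, here is a minimal sketch (not a full WebAuthn parser) of pulling the AAGUID out of authenticator data and mapping it to a friendly name. The layout (32-byte rpIdHash, 1 flags byte, 4-byte signCount, then attested credential data starting with a 16-byte AAGUID) follows the WebAuthn spec; the name table below is illustrative, standing in for a real metadata source like the FIDO MDS.

```python
# Hypothetical lookup table mapping AAGUIDs to human-readable names;
# real relying parties use published authenticator metadata instead.
AAGUID_NAMES = {
    "ee882879-721c-4913-9775-3dfcce97072a": "YubiKey 5 Series",
}

def extract_aaguid(auth_data: bytes) -> str:
    """Return the authenticator's AAGUID as a canonical UUID string."""
    AT_FLAG = 0x40  # "attested credential data present" bit in the flags byte
    flags = auth_data[32]  # flags byte follows the 32-byte rpIdHash
    if not flags & AT_FLAG:
        raise ValueError("no attested credential data present")
    raw = auth_data[37:53]  # 16-byte AAGUID after rpIdHash + flags + signCount
    h = raw.hex()
    # Format as the usual 8-4-4-4-12 UUID string
    return f"{h[:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:32]}"
```

A relying party can then show `AAGUID_NAMES.get(extract_aaguid(data), "Unknown key")` in the account-management UI, which is exactly the make-and-model granularity (and nothing more) that batch attestation is meant to convey.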

Both devices are Yubikey 5 series, and none of the other services asked for anything similar, or triggered any warnings.

Moreover, none of the service providers auto-named my keys with make/model, etc.

> If I have two entries for registered credentials, it is nice if I have some way to differentiate them, such as knowing one of them is a Yubico 5Ci while the other is an iPhone.

First, Google doesn't differentiate the security keys' names even if you allow that data to be read. Plus, you can always rename your keys to anything you want at any of the service providers where I enrolled my keys, so it doesn't make sense.

Moreover, Firefox didn't warn me on any other service where I enrolled my keys, and none of them are small providers by any means.

So, it doesn't add up much.

4m1rk · 2 weeks ago:
Google is not trusted because it was an AI company and needed your data. Apple just joined the club.
That might be "trust" in a relative sense
Since Apple is building ChatGPT integration into its devices, it’s clear that Apple’s users’ data is going to be slurped by Microsoft via ClosedAI servers now.

It’s unlikely latency would permit them to proxy every request to fully mask end-user IPs (it’s unclear what “obscured” means), and they would probably include device identifiers and let Microsoft maintain your shadow profile if that could improve ChatGPT output (it may not require literally storing your every request, so denying that is weasel phrasing).

I’ve been browsing with Private Relay since the day it became available. What’s this intolerable latency you’re talking about?
Browsing is not the same as using a personal assistant.

First, it takes much less compute to serve a page than to run an LLM query. LLMs are slow even if you eliminate all network.

Second, your expectations when browsing are not the same as when using a personal assistant.

Right now even when I simply ask Siri to set a timer it takes more than a couple of seconds. Add an actual GPT in the mix and it’s laughable.

In any case, even with a private relay, Apple’s phrasing does not deny sending device identifiers and allowing ClosedAI/Microsoft to build your shadow profile (without storing requests verbatim).

Nope, you’re moving the goalposts. You were talking about the latency of making a network call. I pointed out that Apple’s current proxying architecture has low latency for web browsing, with orders of magnitude larger requests moving through it. We’re not going to bring GPT slowness into the mix because that’s not what we were discussing.
No, I meant the cumulative latency that increases with every hop. You can’t fool physics. Not proxying is just faster and in case of an already super-slow server these seconds matter to any UX designer worth their salt.
Ironically, I feel like Apple might have lost me as a customer today. It won't matter to Apple, obviously, but so much of what they showed today I just felt was actively pushing me out of the ecosystem.

I first bought some devices for myself, then those devices got handed off to family when I upgraded, and now we're at a point where we still use all of the devices we bought to date - but the arbitrary obsolescence hammer came down fairly hard today with the Intel cut-off and the iPhone 15 Pro requirement for the AI features. This isn't new for Apple; they've been aging perfectly usable devices out of support for years. We'll be fine for now, but patch support is only partial for devices on less-than-latest major releases, so I'll likely need to replace a lot of stuff in the next couple of years, and it would be way too expensive to do this whole thing again. I'll also really begrudge doing it, as the devices we have suit us just fine.

Some of it I can live without (most of the AI features they showed today), but for the parts that are sending off to the cloud anyway it just feels really hard to pretend it's anything other than trying to force upgrades people would be happy without. OCLP has done a good job for a couple of homework Macs, I might see about Windows licenses for those when they finally stop getting patches.

I'd feel worse for anyone that bought the Intel Mac Pro last year before it got taken off sale (although I'm not sure how many did). That's got to really feel like a kick in the teeth given the price of those things.

From rumours of Apple buying lots of GPUs from Nvidia not that long ago I think management got a nice little scare when OpenAI released GPT-3.5 and then GPT-4. It takes several years to bring a CPU to market. Apple probably realised far too late that they needed specific features in their SOCs to handle the new AI stuff, so it wasn’t included in anything before the A17 Pro. For the M1, M2 and M3 I believe that Apple is willing to sacrifice heat and battery to achieve what they want. The A17 Pro is probably very efficient at running LLMs so it can do so in a phone with a small battery and terrible thermal performance. For their Macs and iPads with M1, M2, M3 they will just run the LLMs on the AMX or the GPU cores and use more power and produce more heat.

Could also be a memory problem. The A17 Pro in the iPhone 15 Pro comes with 8 GB of memory while everything before that has 6 GB or less. All machines with the M1 or newer come with at least 8 GB of memory.

PS: The people who bought the Intel Mac Pro after the M1 was released knew very well what they were getting into.

It's worth noting that the power of the Neural Engine doubled between the A16 and A17 chips (17 vs 35 TOPS, according to Wikipedia), while the A15 to A16 was a much more modest increase (15.8 to 17 TOPS). So it does seem like they started prioritizing AI/ML performance with the A17 design.
jwr · 2 weeks ago:
Apple started including the neural engine back with A11 Bionic. In 2017.
And at 0.6 TOPS of performance, that Neural Engine is practically useless today. You can buy a $50 Rockchip board with an NPU that's 10x faster.

Which introduces a funny aspect of the whole NPU/TPU thing. There's a constant stair-stepping in capability; each newer model only obsoletes the older ones faster. It's a bit of a design paradox.
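Putting the TOPS figures quoted in this thread side by side makes the stair-stepping obvious (the numbers are the approximate Wikipedia figures cited above, not official Apple benchmarks):

```python
# Neural Engine throughput as quoted in the thread (TOPS, approximate).
tops = {"A11": 0.6, "A15": 15.8, "A16": 17.0, "A17 Pro": 35.0}

a15_to_a16 = tops["A16"] / tops["A15"]      # ~1.08x: the modest bump
a16_to_a17 = tops["A17 Pro"] / tops["A16"]  # ~2.06x: the doubling
a11_to_a17 = tops["A17 Pro"] / tops["A11"]  # ~58x over about seven years
```

By that arithmetic, a 2017-era Neural Engine is nearly two orders of magnitude behind the current one, which is the whole "newer models obsolete older ones faster" point.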

jwr · 2 weeks ago:
Yes. But I was responding to "Apple probably realised far too late". I think they were in fact way ahead of everyone else, it's just that the hardware of 2017 can't keep up with the demands of today.
It was specifically the LLM stuff. Their neural engines were never designed for running LLMs. The question is whether the new neural engines in the A17 Pro and M4 actually have the required features to run LLMs. That’s at least what I suspect.
> I think they were in fact way ahead of everyone else,

This would be a lot easier to argue if they hadn't gimped their Neural Engine by only allowing it to run CoreML models. Nobody in the industry uses or cares about CoreML, even now. Back then, in 2017, it was still underpowered hardware that would obviously be outshined by a GPU compute shader.

I think Apple would be ahead of everyone else if they did the same thing Nvidia did by combining their Neural Engine and GPU, then tying it together with a composition layer. Instead they have a bunch of disconnected software and hardware libraries; you really can't blame anyone for trying to avoid iOS as an AI client.

I'm genuinely wondering why the Neural Engine was added in the first place. I can't think of any app that made extensive use of it outside of the Gallery and Photos apps. They didn't even allow any access from third parties in the first few iterations.
On-device voice to text for Siri. Facial recognition in the photos app. Text recognition in photos. Scene recognition and adaptation in the camera app. And FaceID.

Not all of those were available from day one; FaceID was.

You have no idea what you're talking about. This is painful to read.
While I mostly agree with your point about Apple being rather aggressive with forced upgrades, I don't think the device requirements for these features were based solely on a desire to push out people with older devices, but rather on the hardware requirements of a lot of the ML/AI features, at least on the Mac side of things. As for why they drew the line at the iPhone 15, perhaps it's a similar performance story. While I'm obviously not intimately familiar with their basis for the device requirements, I'd wait a few more years to see how the requirements for these new features cascade. If they keep requiring newer and newer devices, only supporting the trailing generation or so, then I'd agree wholeheartedly with your sentiment.
I'm with you here. As a proud owner of an iPhone 13 Mini, I refuse to switch to anything bigger than that, but I do concede that any moderately useful AI pipeline will require more power than my aging phone is capable of providing.
I'll always slightly regret not getting a Mini, but 2020 was a really bad year for it to launch (when I didn't feel like doing the extra work needed to see one in person), and by 2021 I actually needed a better camera.

In retrospect though, it may be best that I don't know what I missed.

It'll get you when the battery life drops to 4 hours screen time.
Yes, it's not arbitrary at all — they're only offering it on devices with at least 8GB of memory.

The iPhone 15 Pros were the first iPhones with 8GB. All M1+ Macs/iPads have at least 8GB of ram.

LLMs are very memory hungry, so frankly I'm a little surprised they support such low memory requirements (especially knowing that the system is running other tasks, not just ML). Microsoft's Copilot+ program has a 16GB minimum.
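A back-of-envelope sketch shows why RAM is the binding constraint. The model size and quantization levels below are illustrative assumptions, not Apple's published specs:

```python
def model_memory_gib(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight footprint of an LLM, ignoring the KV cache
    and activations (which add more memory on top of this)."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes / 2**30

# A hypothetical 3B-parameter on-device model:
#   16-bit weights: ~5.6 GiB -- hopeless on an 8 GB phone sharing RAM with the OS
#    4-bit weights: ~1.4 GiB -- tight but plausible
```

Under those assumptions, even an aggressively quantized small model eats a large slice of an 8 GB device that also has to keep the OS and foreground apps resident, which would explain both the cutoff and the reliance on cloud offload for bigger models.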

It's odd...I've gotten along fine without AI in my iPhone 13, and it will continue to work just as I have come to expect with the new iOS.

The new AI features will be available on the iPad Air I just ordered, and on my M1 MacBook Air, and I'll be able to play with them there until I'm ready to upgrade my phone. I think these new features sound great, but I'm not in any hurry to adopt them wholesale.

gumby · 2 weeks ago:
> I think these new features sound great, but I'm not in any hurry to adopt them wholesale.

And if you don’t like them you don’t have to use them. I don’t use Siri and it doesn’t bother me that Apple includes it on all their machines.

> I don't use Siri

That's likely true. Unless you were careful to do a lot more than just disabling it, it does use you though, slurping up quite a bit of data.

Reference? Proof? How is it slurping data if I have it turned off and don't even use it?
You have to disable the data slurping for each app. The main toggle just governs whether it responds to voice commands. Previous discussion [0].

Separately, the data Siri sends isn't held to the same differential privacy standards as some of Apple's other diagnostics. They just give you a unique ID and yolo it [1]. Unless personalized device behaviors are somehow less identifiable than all the other classes of data subject to deanonymization attacks (demographics, software/hardware version fingerprints, ...), that unique ID is just to be able to pretend to the courts that they tried (give or take Hanlon's razor).

[0] https://news.ycombinator.com/item?id=39927657

[1] https://www.apple.com/legal/privacy/data/en/ask-siri-dictati...

How?
I'm not following. What business did they lose if you weren't planning to upgrade? Maybe there is a misunderstanding of what's being gated in the release. I have an iPhone 13, and it was not a surprise to see that AI upgrades would require new hardware. Maybe I'll get a 16 if reviews confirm that it's good.
  • shrew · 2 weeks ago
iOS 18 will still be available for older devices, right? From the looks of the preview, it'll go back to phones from 2018, which is fairly standard for Apple. And I'd imagine older iOS versions will continue to receive security updates for several years after they're dropped from the latest version.

What is it about this release that has lost your support? Specifically gating the Apple Intelligence stuff to the most modern hardware?

I mean, you're pushed out to what? Lol, you're acting like Android doesn't obsolete the shit out of their past-cycle phones. I don't really get what you wanted them to do here. They're deploying AI in the OS and ecosystem where they can, and the features the hardware supports are being brought in; I don't see where they're blocking features the hardware supports just "because". I don't think they've clarified anywhere which of the cloud tools won't work on older versions. But at the end of the day, old hardware is old: it's not going to support everything, especially on generational shifts like how much better ARM was over Intel, or the fact that NPUs don't just manifest inside old silicon.
I'm confused. My understanding is that they didn't drop support for all Intel Macs in Sequoia. My 2018 MBP for example is still supported. The last Intel Mac Pro in your example is also still supported.

My MBP hasn't been _fully_ supported for many years. The M1-specific features started rolling out in 2021, the ability to run iPad apps being the most obvious one, and the ability to get the globe view in Maps being the most questionable. IIRC, there was not yet an M1 Pro/Max MBP available for sale when they announced the M1-specific features.

The point being, having AI features unavailable doesn't make the Mac unusable any more than it makes an iPhone 15 unusable. Those parts should continue to operate the way they do today (e.g. with today's Siri).

Apple isn’t magic and can’t defy physics. The chips on the older devices aren’t powerful enough to run the new features.

Hardware matters again now in a way it hasn’t for a couple of decades.

Requiring a new device for new features is not the same thing as removing support for older devices.
What? Your phone does everything it did when you bought it and will keep receiving important updates for years to come. How entitled are you to expect to receive every upcoming feature? And where else are you gonna get that? Lol
Not sure this is accurate - a lot of devices have been culled from receiving future updates with these releases. This is not anything new this year, I get that, and I don't really mind not getting the new AI features, but having devices that will stop being supported and which can't have any other software installed because of being locked down is really not a fun situation to be on this side of.

The old Macs can still install Linux/Windows/ChromeOS Flex. iPads/iPhones not so much.

  • tjmc · 2 weeks ago
It will be interesting to see if there's an Osborne Effect on iPhone 15 (non-pro) sales now that the model is effectively stuck with brain-dead Siri.
It’s ironic how the one company that is WAY over the top wrt secrecy — not only to the public, but also and especially internally (they’ve even walled the hardware team from the software team while developing the iPhone!) — is at the same time the one company that really nails integration.
The key difference is that Apple (as an organization) appears to have an overarching roadmap (that spans multiple product lines). The secrecy is irrelevant as long as the leadership of each division is aligned (it hurts, but does not prevent success). Google, MS, and others are less organized at the top, so subdivisions of the overall org are left to plan for themselves and between each other, which leads to conflicts. Resolution may be achieved when things get pushed high enough, but only if it surfaces at the top for a leader (if such people exist in their org structure) to declare a resolution and focus for the groups involved.
This reads like a critique of centralized versus decentralized control, but I think it’s more about lack of clear intention.

Apple has a clear intent that allows the subsequent groups to work towards and contribute to it. Google and Microsoft don’t. They have a vague idea, but not something tangible enough for subordinate leaders to meaningfully contribute to.

As the chess players know, ‘a bad plan is better than none at all’. https://www.chesshistory.com/winter/extra/planning.html
Apple is odd in other ways. For example: Calculator on iPad. Once they had a few iPad releases without a calculator, they needed a sufficient _reason_ to release a calculator app for iPad. The product was gated by the narrative.

There was also likely no team on Calculator at all (are there bugs that justify a maintenance team?), so it needed a big idea like Math Notes to be green-lit, or it would simply never come. This is despite the missing calculator being an obvious deficiency, and solving it via a port being a relatively tiny lift.

People will scoff and say "Yeah, but all kinds of companies have internal firewalls, big deal". But no, these were literal walls that would appear over a weekend and suddenly part of the campus was off-limits to those not on the iPhone project.
Wow. Any place to read about that?
I found it in Fred Vogelstein's "Dogfight: How Apple and Google Went to War and Started a Revolution" [0]. Decent read, not amazing IMO, but with fun tidbits about the iPhone / Android launches

[0] https://libgen.is/book/index.php?md5=60B714243482AE3D7B9A83B...

  • gumby · 2 weeks ago
Any book about the development of the iPhone, or articles in the business press from a few months after product introduction (if any are still online… try Fortune).
I heard they did this for the Amazon Fire Phone, too
  • bbor · 2 weeks ago
Well tbf I’m not sure Google does project ownership… I was shocked how many seemingly important conversations ended with “well, I guess these days that functionality is owned by the community of relevant stakeholders…” (aka: owned by nobody at all). I think they’re only able to do what they’ve done through the sheer concentration of brilliant overpaid engineers, in spite of such “innovation”.

Totally agree on the AI points. Google may have incredible research, but Apple clearly is playing to their strengths here.

Could you please explain ‘lay bare organisational deficiencies’? I ask without skepticism.
Most companies don't have a unified platform they can build this on. And even if they superficially seem to, the internal organisation is so splintered that it'll never happen.

Like what's going on inside Google: it's getting stupidly political and infighty. If someone tries to build a comprehensive LLM that touches Gmail, YouTube, Docs, Sheets, etc., it's going to be an uphill battle.

And even if they did, there'd be five competing efforts, two would be good or at least decent, four would be deployed (not the best one though), and all would be replaced in three years.

None of them would work on-device, all would leak your data into the training set.

And you forgot, if it was Google they would also all have their own internal chat service.
And two of them would be killed within a year and/or renamed/rebranded :)
They did do that. Back in 2013-2015 or so. It was called Google Now, and it was a bit like magic.

It showed you contextual cards based on your upcoming trips/appointments/traveling patterns. E.g. Any tube delays on your way home from work. How early you should leave to catch your flight.

This alongside Google Inbox was among the best and most "futuristic" products.

I was glad to see today Apple implementing something similar to both of these.

But it was a decent new Google product, hence all the past tenses.
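The "how early should you leave" card is, at its core, simple arithmetic over a calendar event and a travel estimate. A minimal sketch (the numbers and the buffer are invented, not Google's actual heuristics):

```python
from datetime import datetime, timedelta

def leave_by(event_start: datetime, travel_minutes: int,
             buffer_minutes: int = 15) -> datetime:
    """Latest departure time: event start minus travel time minus a buffer."""
    return event_start - timedelta(minutes=travel_minutes + buffer_minutes)

flight = datetime(2024, 6, 11, 18, 30)          # boarding at 18:30
print(leave_by(flight, travel_minutes=45))      # 2024-06-11 17:30:00
```

The hard part was never the arithmetic; it was having trusted access to the flight email, the calendar, and live traffic in one place.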
Google Then™
Google Now was actually a great idea, it was the only "social" media I actually understood.

No wonder they killed it.

I've always been baffled why those two got canned. They were both really useful.
Canceling Google Inbox was when I started to move off their platform. It was their best product in years and finally got a handle on email chaos, and then they just killed it with no follow-up. Insane.
Nothing compared to the fighting over whether the Office team or the Platform team at Microsoft owned the AI 'client' work, back in the Windows 7 days.

There'd be constant sabotage.

I assume they mean: expose internal corporate silos/VP-fiefdoms that don't work seamlessly together despite being marketed under the same brand
  • dmix · 2 weeks ago
Google is quite notorious for having this issue from various blog posts and HN comments I've read.

Lots of middle management power groups that would prevent a cohesive top down vision from easily being adopted.

This is the norm, not the exception for big companies.

The more time I spend in mid-large companies, the more I'm amazed that Apple somehow managed to avoid releasing three different messaging apps that do the same thing.

  • ljm · 2 weeks ago
The same as any enterprise company. It’s all office politics and bureaucracy.

Make no mistake, Google is Enterprise.

Now it makes sense why Elon Musk trims the fat on his companies ruthlessly and regularly
  • gumby · 2 weeks ago
The phenomenon is called “Conway’s Law”: a product reflects the organizational structure.

https://en.wikipedia.org/wiki/Conway%27s_law

Why wouldn't Microsoft be able to?

Anyway, while I see all of your points, none of the things I've read in the news make me excited. Recapping meetings or long emails or suggesting how to write are just...not major concerns to me at least.

  • dagw · 2 weeks ago
> Why wouldn't Microsoft be able to?

Microsoft seems to have lost all internal cohesion and the ability to focus the entire company in one direction. It's just a collection of dozens of small fiefdoms only caring about hitting their own narrow KPIs with no overall strategic vision. Just look at the mess of competing interest that Windows 11 and Edge have turned into.

They can’t even get marketing on the same page, such that they counter-productively confuse the hell out of their customers who might be considering giving them more money.

Quick, what’s “copilot”?

Automates the tedious/boring parts of flying between regions in Microsoft Flight Simulator. On higher difficulty levels, can use voice recognition to accept tasks ("Copilot, we are losing fuel, find the nearest airport we can land at", "Copilot, what is the VFR frequency for the airport?", etc.) Sometimes misunderstands tasks and/or will give erroneous information, to increase fidelity to real-world situations.
+ Teams, which includes a feature to build entire apps... inside Teams.
  • Onawa · 2 weeks ago
Oh God, when my partner started exploring using Power Apps for Teams to build a platform for running a clinical study, I was intrigued... then horrified as I tried to help her get it set up. https://learn.microsoft.com/en-us/power-apps/teams/create-fi...
Maybe it's just my overly-cynical ass but when the parent comment said Teams+ lets you build apps inside Teams, I physically shuddered.
'We call it Apps for Teams Live 365+!'
The flip side is that they would not have been able to execute so well with Azure etc. if the Windows org had too much of a say about pushing Windows as the OS of choice everywhere. Winning in a brand new space, especially one that might be disruptive to other business units, sometimes necessitates letting the new org do its own thing.
"execute" and "well" in the same sentence when referring to Azure is a bit weird to see.
Only to the ignorant. Tens of billions of dollars a year are spent on the platform. Either all of those customers are misinformed about the capabilities of Azure or you are. I'm going to rely on Occam's Razor here.
Yeah, it's hard to believe that VSCode and Windows are products from the same company. Very different vibes.
  • araes · 1 week ago
Is VS Code the same as Visual Studio? Super confused.

Visual Studio Code appears to be a code editor with support for C, C#, C++, Fortran, Go, Java, JavaScript, Node.js, Python, Rust, and Julia.

Visual Studio appears to be a code editor with support for 36 languages, including C, C++, C++/CLI, Visual Basic .NET, C#, F#, JavaScript, TypeScript, XML, XSLT, HTML, and CSS.

Visual Studio Code appears to be liked by almost every user and the favorite in a bunch of online polls.

Visual Studio appears to be unusable junk, widely hated in almost every survey, and unable to even display its own error messages correctly in 2022.

Visual Studio Code is supposedly a "related product" according to Wikipedia: https://en.wikipedia.org/wiki/Visual_Studio#Related_products

How are these related? They seem like Microsoft's internal fiefdoms again.

Definitely different things; I've used them both. Visual Studio is from before they went on their Microsoft <3 Open Source campaign. It has pricey licenses and is pretty much an ad for .NET.

It's a traditional Windows application, .NET/WPF I think, configured via XML.

VSCode is free, an Electron app, and has a plugin store with lots of niche language plugins. Configured via JSON.

Surely they're using VSCode to exert influence in their Microsofty way, but it feels much less like a prison.

VSCode still feels like a bit much to me (though less of a monster than Visual Studio). I'm pretty happy with Helix.

  • araes · 1 week ago
Cool. Thanks. Started in with the newest version of VS 2022 recently for C++, and then found out there's apparently something better in a different internal fiefdom that people actually like.
> Why wouldn't Microsoft be able to?

They are irrelevant in the mobile ecosystem, the place where almost all of these features are most relevant and useful.

I've heard Microsoft has gotten better, but I think this still rings true. https://www.reddit.com/r/ProgrammerHumor/comments/6jw33z/int...
> suggesting how to write

As a(n amateur) fiction writer who pays too much for ProWritingAid each year, I'd love to see if this feature is any good for fiction. I take very, very few of PWA's AI-suggested rewrites, but they often help me see my own new version of a bit of prose.

"Anyone serious about software should be making their own hardware - Alan Kay" - Steve Jobs
Microsoft is trying, and I feel they are in a much stronger position than Google. The same advantage that Apple has for personal docs and images, Microsoft has across business content. Seamless AI integration across Teams, Outlook, SharePoint, and other Office products offers huge platform benefits.
  • kaba0 · 2 weeks ago
What personal data does Microsoft have on you? It doesn't even know where my photos are, as the folder structure is in the end completely arbitrary; how could it execute "call the person in this photo" or a similar level of integration?

This Apple AI presupposes their strong ecosystem, which no one has anything similar to. Google was in a good position years ago, but they are criminally unfocused nowadays.

I know reading comprehension is difficult based on the overall quality of internet content these days. But I explicitly called out the business data Microsoft has in this domain. So why does Microsoft’s control of personal data factor into this at all? Do you have anything of value to add? Did you even bother to read what you are responding to? But sure. Go off on your anti Microsoft rant completely disconnected from the topic at hand.
For Google in particular, this was honestly something they could've done far earlier. They had the Pixel phones, they had the Tensor stuff, and then Gemini came along.

But for some reason, they decided to just stick to feature tidbits here and there and chose not to roll out quality-of-life UI features to make Gemini use easier on normal apps and not just select Google apps. And then it's also limited by various factors. They were obviously testing the waters and were just as cautious, but imho it was a damn shame. Even summarization and key points would've been nice if I could invoke it on any text field.

But yeah, this is truly the ecosystem benefit in full force here for Apple, and they're making good use of it.

Google couldn't figure out which messaging platform of theirs would succeed; imagine if the team working on Hangouts or Meet v1 had worked on RCS or context first.

RCS isn't fair, Google wanted the carriers to work on that, but in a disparate ecosystem they also couldn't come to a decision

  • kaba0 · 2 weeks ago
Android CPUs are not playing in the same ballpark as Apple's; there is always at least one generation of difference between them. And the core of these AI features is on-device intelligence, possibly enabled by having managed to fit a good chunk of an LLM onto an iPhone. Its ability to determine when to go online is the crucial part.

Also, Apple bought up more ML startups than Google or Microsoft.

How long until the EU decides that making this a platform capability is a fineable offense?
That's actually exactly what you want. No one company should know what you do on all apps.
At this point, I trust {insert megacorp} more than {insert App Dev LLC} + {insert megacorp}.

Neither is great, but at least the megacorp has a financial incentive to maintain some of my privacy.

  • Sai_ · 1 week ago
I don't quite understand this comment. Are you encouraging us to use your comment as some sort of template and insert our own preferred corporate names?

Sounds like some crazy level of meta where your brilliance is applicable to any pair of mega corps...which I don't buy.

Neither is Apple, unless one buys wholly into the Apple ecosystem. I want open AI tools that I can truly use with all my text. But I'm not holding my breath.
"Open tools" and "Integration" is really hard to do.

I'd _love_ to be able to pull down my email from Fastmail, Calendars from iCal, notes from Google Notes etc to a single LLM for me to ask questions from, but it would require all of the different sources to have a proper API to fetch the data from.

Apple already has it all on device, targeted by ML models since multiple iPhone versions ago. Now they've just slapped a natural language model on top of it.
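A sketch of what such a cross-service aggregator might look like, with hypothetical fetchers returning canned data. Real sources would need IMAP/JMAP, CalDAV, and REST APIs behind each `fetch` function, which is exactly the "proper API" gap described above:

```python
# Hypothetical aggregator: each source exposes a fetch() -> list of text
# snippets, and everything is concatenated into one context for a local LLM.
# The sources here return canned data; real ones would call IMAP/CalDAV/REST.

def fetch_mail():
    return ["From: boss -- 'Q3 review moved to Friday'"]

def fetch_calendar():
    return ["Fri 10:00 Q3 review (moved)"]

def fetch_notes():
    return ["Prep slides for Q3 review"]

def build_context(sources):
    """Merge snippets from every source into a single prompt context."""
    lines = []
    for name, fetch in sources.items():
        for snippet in fetch():
            lines.append(f"[{name}] {snippet}")
    return "\n".join(lines)

context = build_context({"mail": fetch_mail,
                         "calendar": fetch_calendar,
                         "notes": fetch_notes})
print(context)
```

The merging is trivial; the moat is that Apple gets the fetchers for free because the data already lives in its apps.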

> I always saw this level of personal intelligence to come about at some point, but I didn’t expect Apple to hit it out of the park so strongly

That's a little premature, let's try not to be so suckered by marketing.

Apple is again going where Google (the world's largest ad company) cannot follow: 100% user privacy.

They really hammered in the fact that every bit is going to be either fully local or publicly auditable to be private.

There's no way Google can follow, they need the data for their ad modeling. Even if they anonymise it, they still want it.

They literally announced their partnership with OpenAI today, and I've seen no sign of this data being "publicly auditable" - can you share this with me?
The OpenAI integration is a side-feature.

All the stuff that works on your private data is Apple models that are either on-device or in Apple's private cloud (and they are making that private cloud auditable).

The OpenAI stuff is firewalled off into a separate "ask ChatGPT to write me this thing" kind of feature.

> I've seen no sign of this data being "publicly auditable" - can you share this with me?

They announced it in the same keynote where they announced the partnership with OpenAI (and stated that sharing your data with OpenAI would be opt-in, not opt-out).

WTF are you talking about? The guy literally said that to connect to Apple Intelligence servers, the client side verifies a publicly registered audit trail for the server. He then followed up saying ChatGPT won't keep session information about who the data came from.

Apple's big thing is privacy; I doubt they'd randomly lie about that.

This still runs on external hardware, which can be spoofed at the demand of authorities. It may be private in that they themselves won't monetize it, but your data certainly won't be safe.
Ahhh cool, encryption doesn't exist, mTLS doesn't exist, I forgot
I can't speak towards Apple's or $your_government's trustworthiness, but mTLS wouldn't protect against an attack where Apple collaborates with a data requester.

There are people and orgs out there who (justifiably or not) are paranoid enough that they factor this into their threat model.

This is a bit academic right now, but it's also worth mentioning that in the coming years, as quantum computing becomes more and more practical, snapshots of data encrypted using quantum-unsafe cryptography, or with symmetric keys protected by quantum-unsafe crypto (like most Diffie-Hellman schemes) will be decryptable much more easily. Whether a motivated bad actor has access to the quantum infrastructure needed to do this at scale is another question, though.

How about you Google DMA Memory Attacks, VM Escape attacks, Memory scraping and sniffing, Memory Bus Snooping and so on.

As long as the data is processed externally, no software solutions make it safe, unless you yourself are in control of the premises.

"100% user privacy."

That is a huge stretch and a signal as to how good Apple is with their marketing.

If they are still letting apps like GasBuddy sell your location to insurance companies, then they are nowhere near "100% privacy".

GasBuddy is an optional app, right? Apple is very up front about what apps are going to get access to things like location, with user prompts to allow/deny. Meaning you are opting in to a lack of privacy, which is very expected behavior?

The default Apple apps (maps, messaging, safari) are solid from a privacy perspective, and I don't think you can say the same about the default apps on competitors phones.

I am sorry I used GasBuddy as an example since I agree it is a stretch, but still not one I disagree with.

But let's get back to Apple...if it was functioning at "100% user privacy" would it be able to give access to your data to law enforcement? As an example, I consider MullvadVPN to be 99% user privacy.

No.

That was concerning unlocking the phone. I’m talking about the data that they store on iCloud.

  • krrrh · 2 weeks ago
I already linked to this article on Advanced Data Protection for iCloud (e2ee for most things) in a different comment, but it feels like a lot of people don’t know about this feature. It literally has zero effect on the user experience (except janky access to iCloud via the web, but shrug). Apple’s competitors don’t have anything close and their business models mean they probably never will.

https://support.apple.com/en-ca/102651

The data stored on iCloud is locked with the key from the device's Secure Enclave. They'd have to unlock your device to get access to decrypt the iCloud data.
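What's being described is essentially envelope encryption: the cloud blob is sealed under a key only the device can release. A deliberately toy illustration of the idea (a SHA-256 keystream XOR stands in for a real AEAD cipher; this is not Apple's actual implementation and is not secure crypto):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key (toy stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a key-derived keystream."""
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

unseal = seal  # XOR is its own inverse

device_key = b"held-only-in-the-secure-enclave"  # never leaves the device
blob = seal(device_key, b"photo bytes")           # what the cloud stores
assert unseal(device_key, blob) == b"photo bytes" # the device can decrypt
```

The property that matters: the server holds `blob` but not `device_key`, so the ciphertext is useless to it (or to anyone compelling it).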
Based on Apple's previous track record, the answer is very likely "no".
Why should Apple be in control of what individual apps do with your location data? You explicitly grant the app access to your data and agreed to the terms.

The difference between that and this is extremely clear is it not?

If I'm buying a device that gives me apps on a locked-in platform, why shouldn't they care about what the apps do with my information?

Imagine if we had a smartphone maker that cared about this, so we didn't have to worry about it all the time.

Gas Buddy, like all 3rd party apps, has their privacy practices detailed on their App Store page. It's true that not all vendors are completely truthful with this information, but Gas Buddy (for one) appears to be pretty up-front: everything in the app is shared with the developers or others except (they say) diagnostic information. Apple set up a privacy-disclosure rule, Gas Buddy seems to be following it, and it's the user's choice whether to install Gas Buddy.

Apple has done its privacy work here; now it's up to the end user to make the final choice.

It's the potential for the model. Everyone else is hoovering the internet to model everything and Apple is sticking with their privacy message and saying 'how can I model your stuff to help you.'

That's tangibly different.

I beg to differ.

Example that should be super trivial: try to set up a sync of photos taken on your iPhone to a laptop (Mac, Windows, or Linux) without going through Apple's cloud or any other cloud.

With an Android phone and Windows laptop (for example) you simply install the Syncthing app on both and you're done.

My point is not "Apple is worse", instead I'm just trying to point out that Apple definitely seems eager to have their users push a lot of what they do through their cloud. I don't see why their AI will be any different, even if their marketing now claims that it will be "offline" or whatever.

Apple is interested in providing products that they can guarantee will work, and meet actual user requirements.

"Sync my files without using Apple's cloud" is not a user requirement. Delivering features using their cloud is a very reasonable way for Apple to provide services.

Now, "sync my files without compromising my privacy" is a user requirement. And Apple iCloud offers a feature called "Advanced Data Protection" [1] that end-to-end encrypts your files while still supporting photo sharing and syncing. So no, you can't opt out of using their cloud as the intermediary, but you can protect your content from being decrypted by anyone, including Apple, off your devices.

It has the downside that it limits your account recovery options if you lose the device where your keys are and screw up on keeping a recovery key, so it isn't turned on by default, but it's there for you to use if you prefer. For many users, the protections of Apple's standard data protection are going to be enough though.

[1] https://support.apple.com/en-us/102651#:~:text=Advanced%20Da....

I'm a user and I require that feature. Transferring photos over a USB cable to a PC has been a feature in all portable electronics with a camera for the past 25+ years, yet Apple is still getting it wrong.
Wires? Oh yeah, I remember when things had wires. Good times.
> Wires? Oh yeah, I remember when things had wires. Good times.

Last I checked, the more expensive MacBooks had three USB ports, and the cheap ones have two.

Since MacBooks no longer have Ethernet ports, those USB ports are useful for plugging in the dongle when I want to connect the MacBook to an Ethernet wire. Good times.

> Example that should be super trivial: try to setup a sync of photos taken on your Iphone to a laptop (Mac or Windows or Linux) without going through Apple's cloud or any other cloud?

The first hit on Google makes it look trivial with iPhone too?

https://support.apple.com/guide/devices-windows/sync-photos-...

> With an Android phone and Windows laptop (for example) you simply install the Syncthing app on both and you're done.

And with iPhone you just install the "Apple Devices" app: https://apps.microsoft.com/detail/9np83lwlpz9k

iCloud synchronizes all my stuff between all my devices (windows too) now. They've always been privacy-forward. I could completely see a container that spins up and AI's my stuff in their datacenter, that they don't have visibility into. The impact of them getting it wrong is pretty significant.
> Example that should be super trivial: try to setup a sync of photos taken on your Iphone to a laptop (Mac or Windows or Linux) without going through Apple's cloud or any other cloud?

Install jottacloud and enable the photos backup feature.

I just plug my iPhone into my Windows laptop and use the photo import tool built into Windows. It works completely fine.

I also sync my photos onto my NAS via SFTP, using the PhotoSync app.

  • dereg · 2 weeks ago
Apple Intelligence stuff is going to be very big. iOS is clearly the right platform to marry great UX with AI. Latching LLMs onto Siri has allowed the Siri team to quickly atone for its sins.

I think the private compute stuff is going to be really big. Beyond the obvious use of cloud servers for heavy computing tasks, I suspect it means we're going to get our own private code interpreter (proper scripting on iOS), and this is probably Apple's path to eventually allowing development on iPadOS.

Not only that, Apple is using its own chips for its servers. I don't think the follow-on question is whether that's enough or not. The right question is: what are they going to do to bring things up to snuff with NVIDIA on both the developer end and the hardware end?

There's such a huge play here and I don't think people get it yet, all because they think that Apple should be in the frontier model game. I think I now understand the headlines of Nadella being worried about Apple's partnership with OpenAI.

  • wayeq · 2 weeks ago
> allowed the Siri team to quickly atone for its sins.

Are we sure there is a Siri team at Apple? What have they been doing since 2012?

  • dereg · 2 weeks ago
Learning how to write llm function calls.
  • rvnx · 1 week ago
There is also this thing with Siri and Google Assistant where a lot of the answers are manually entered (the jokes, etc.), so the switch to an LLM could be a massive improvement.
I don't get this at all. How does integrating Siri with an LLM mean you get an interpreter and allow development?
As much as I hoped for Xcode on the iPad, I still don’t think any of this AI stuff or “private cloud” is related.

Though I don’t know if I would use my iPad for programming even if it was possible, when I have a powerful Macbook Pro with a larger screen.

I do believe much of what they showed was impressive. It actually seems to realize the "personal digital secretary" promise that personal computing devices throughout the decades were sold on.

The most important question to me is how reliable it is. Does it work every time or is there some chance that it horribly misinterprets the content and even embarrasses the user who trusted it.

  • dom96 · 2 weeks ago
Yeah, reliability is the crucial bit. Like that example he showed where it checked whether he could make an appointment (by checking driving times): a lot can go wrong there, and if the assistant tells you "Yes, you can" but you cannot, I can see lots of people getting angry and not trusting it for anything.
In the context of off-device processing, it's worth keeping in mind that US surveillance laws have recently expanded in their scope and reach:

https://www.theguardian.com/us-news/2024/apr/16/house-fisa-g...

  • ENGNR · 2 weeks ago
For this reason, I really hope we can self-host our "private cloud" for use with Apple devices. That would truly, properly allow end-to-end privacy. I don't trust Apple given the legislation you've just linked to; both claims obviously can't be correct.
Only a vanishingly small percentage of users have the ability to do this properly. I have 40 years of development experience and I don't trust myself to set up and properly run these types of servers.
  • ENGNR · 1 week ago
Fair, but we could conceivably have an ecosystem of providers, like ProtonMail or whoever the user feels comfortable with. If it's just Apple, we're headed for a honeypot.
I’ve been waiting for Apple to arrive. They bring so much polish and taste.

Two features I really want:

“Position the cursor at the beginning of the word ‘usability’”

“Stop auto suggesting that word. I never use it, ever”

Apple auto suggest can be ducking annoying
Legitimately good voice recognition would probably be the "killer feature" to get me to switch from Android to iOS after all this time. I'm so frustrated with the current state of voice recognition in Android keyboards, but ChatGPT's recent update is amazing at voice recognition. I type primarily by voice transcription, and I would be so happy if I could go from 70% voice / 30% typing to 95% voice / 5% typing.
One really powerful use case they demoed was that of meeting conflicts.

"Can you meet tonight at 7?" Me "oh yes" Siri "No you can't, your daughter's recital is at 7"

It's these integrations which will make life easier for those who deal with multiple personas all through their day.

But why partner with an outside company? Even though it's optional on the device, etc., people are more miffed about the partnership than excited by all that Apple has to offer.

The image generation is DALL-E 2.5 level and feels really greasy to me; beyond that I think the overall launch is pretty good! I also congratulate Rabbit R1 on their timely release months before WWDC https://heymusic.ai/music/apple-intel-fEoSb
The generated image of two dice (https://x.com/thomasahle/status/1800258720074490245) was DALL-E 1 level.

Just randomly sprinkled eyes on the sides. I wonder why they chose to showcase that.

kaba0 · 2 weeks ago
What eyes are you talking about? That’s two “hand-sketched” dice, isn’t it?
Did you look at the eyes/pips?

On the side with 5, they are overlapping. On the side with 4, some of them are half missing. On the side with 3, they are arranged in a triangle instead of a straight line.

Not to mention that 2 and 5 should be on opposing sides, same with 3 and 4.

It's basically like early AI being unable to generate hands, or making 6 fingers.
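The constraint the image violates is simple enough to state in a few lines of code: on a standard Western die, opposite faces always sum to 7, so the face pairs shown together could never both be visible. A quick sketch:

```python
# On a standard Western die, opposite faces always sum to 7:
# 1 <-> 6, 2 <-> 5, 3 <-> 4.
def are_opposite(a: int, b: int) -> bool:
    return a + b == 7

# Opposite faces can never both appear in a single view of one die,
# so a die showing 3 and 4 together (as in the generated image) is impossible.
assert are_opposite(2, 5)
assert are_opposite(3, 4)
assert not are_opposite(2, 3)  # 2 and 3 can legitimately share an edge
```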

Yeah, the image generation felt really…cheap?…tasteless? but everything else was really impressive.
mholm · 2 weeks ago
Personalization really feels like the missing link here. The images it creates are highly contextual, which increases their value dramatically. Nobody on Reddit wants to see the AI generated T. rex with a tutu on a surfboard, but in a group chat where your dancer buddy Rex is learning to surf, it’s a killer. The image AI can even use photos to learn who a person is. That opens up a ton of cool ways to communicate with friends
> in a group chat where your dancer buddy Rex is learning to surf, it’s a killer.

Maybe, but this class of jokes/riffs is going to get old, fast.

It's what I expected: they weren't going to open the Pandora's box of realistic photo generation in iMessage, lol. That's why the limit to illustration, cartoon, etc. is there: to limit the liability of it going wild. They can add more "types" later as they get things more tested. Realistically it's just prompts hidden behind bubbles, but it allows them to slowly roll out options that they've heavily vetted.
I think that basically stretched the limit of what a local model can achieve today, which also makes their image API almost useless for any serious generative-art developers.
Fwiw I don't think "serious generative art developers" are the target audience at this point, that's probably on the order of .01% of their users
2 weeks ago
> The way Siri can now perform actions based on context

Given that this will apparently drop... next year at the earliest?... I think it's simply quite a tease, for now.

I literally had to install a keyboard extension on my iPhone just to get Whisper speech-to-text, which is thousands of times better at dictation than Siri; Siri seems about 10 years behind the curve at this point

Ooh, which keyboard extension is this?
Auri AI. Note that it's not free, and the way it works is via the clipboard, so it's a bit hacky, but it mostly works well.
> the platform owners where you have most of your digital life

Yup! The hardest part of operationalizing GenAI has been, for me, dragging the "ring" of my context under the light cast by "streetlamp" of the model. Just writing this analogy out makes me think I might be putting the cart before the horse.

The UI design part? The integration part? The iteration part?

Apple products tend to feel thoughtful. It might not be a thought you agree with, but it's there.

With other companies I feel like I'm starving, and all they are serving is their version of gruel... Here is your helping, be sure to eat all of it.

Whenever I read that expression I have to think about the Porsche commercial from a few years back. I guess it’s not always a bad idea :)

https://assets.horsenation.com/wp-content/uploads/2014/07/dw...

> but I didn’t expect Apple to hit it out of the park so strongly.

No-one is hitting anything out of the park, this is just Apple the company realising that they're falling behind and trying to desperately attach themselves to the AI train. Doesn't matter if in so doing they're validating a company run by basically a swindler (I'm talking about the current OpenAI and Sam Altman), the Apple shareholders must be kept happy.

> No-one is hitting anything out of the park

I kind of feel like their walled garden and ecosystem might just have created the perfect environment for an AI integrated directly to the platform to be really useful.

I’m encouraged, but I am already a fan of the ecosystem…

I have no confidence this will work as intended. The last MacOS upgrade had the horrible UX of guessing which emoji you want and being wrong 95% of the time. I don't expect this to be any better. Demos are scripted.

I also expect it to fail miserably on names (places, restaurants, train stations, people), people that are bilingual, non-English, people with strong accents from English not being their first language, etc.

Do you think Apple could develop an AI so powerful that it would allow me to uninstall Siri from my iPhone?
You can turn it off in settings
>The way Siri can now perform actions based on context from emails

I did not see the announcement. Can Siri also send emails? If so then won't this (like Gemini) be vulnerable to prompt injection attacks?

Edit: Supposedly Gemini does not actually send the emails; maybe Apple is doing the same thing?

dudus · 2 weeks ago
It doesn't look like it does. It seems to only write the email for you, not send it. At least not yet.
It just writes the content, it doesn't actually send anything.

We'll find out later if there's an API to do something like that at all, or whether external communications are always behind some hard limit that requires explicit user interaction.
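As a sketch of what that hard limit could look like (purely hypothetical, not Apple's actual API): the model only ever produces a draft object, and the send path refuses to run unless a flag that only a real user gesture can set is present. That keeps prompt-injected "send this now" instructions in an email from doing anything:

```python
from dataclasses import dataclass

@dataclass
class EmailDraft:
    to: str
    subject: str
    body: str
    confirmed: bool = False  # flips only via an explicit user gesture (hypothetical)

def compose_with_llm(prompt: str) -> EmailDraft:
    """The model may write a draft, but never send it (sketch; body is a placeholder)."""
    return EmailDraft(to="alice@example.com", subject="Re: dinner",
                      body=f"Drafted from: {prompt}")

def send(draft: EmailDraft) -> str:
    # The send path checks a flag that only UI code (a tap) can set,
    # so injected instructions in message content can't trigger sending.
    if not draft.confirmed:
        raise PermissionError("user confirmation required")
    return f"sent to {draft.to}"

draft = compose_with_llm("reply yes to dinner")
try:
    send(draft)            # an injected "send this now!" is stopped here
except PermissionError:
    pass
draft.confirmed = True     # explicit user tap
assert send(draft) == "sent to alice@example.com"
```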

Some of it will undoubtedly be super useful. Things like:

- Proofread button in mail.

- ChatGPT will be available in Apple’s systemwide Writing Tools in macOS

I expect that once you get used to it, it'll be hard to go without it.

> The way Siri can now perform actions based on context from emails and messages like setting calendar and reservations

I can't think of something less exciting than a feature that Gmail has supported for a decade.

Overall there's not a single feature in the article that I find exciting (I don't use Siri at all, so maybe it's just me), but I actually see that as a good thing. The less GenAI they add, the better.

The difference is that this is on-device and private. Gmail just feeds your emails to Google's servers and they do the crunching. And meanwhile they train their systems to be better using your content.
It changes nothing about the impressiveness (or lack thereof) of the feature.

Detecting an appointment from an email doesn't even require AI.

You're also over-indexing on the fact that some processing will be done on device. The rest will go to Apple's servers just the same as Google. And you will never know how much goes or doesn't.
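To the point that appointment detection doesn't need AI: here's a rough sketch of rule-based extraction (on Apple platforms the real mechanism is NSDataDetector, which is far more thorough, but the principle needs no language model):

```python
import re
from datetime import datetime, timedelta

def detect_meeting(text: str, now: datetime):
    """Very rough sketch: find 'tomorrow at <hour>' and return a datetime, else None."""
    m = re.search(r"tomorrow at (\d{1,2})(?::(\d{2}))?\s*(am|pm)?", text, re.I)
    if not m:
        return None
    hour = int(m.group(1))
    minute = int(m.group(2) or 0)
    if (m.group(3) or "").lower() == "pm" and hour < 12:
        hour += 12
    day = now.date() + timedelta(days=1)
    return datetime(day.year, day.month, day.day, hour, minute)

now = datetime(2024, 6, 10, 9, 0)
assert detect_meeting("Can you meet tomorrow at 7pm?", now) == datetime(2024, 6, 11, 19, 0)
assert detect_meeting("no time mentioned", now) is None
```

Obviously brittle compared to a model, but it covers the happy path Gmail and Apple Mail have handled for a decade.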

Apple Mail has been able to detect appointments and reservations from email for years, just like Gmail -- and at least in my experience, Apple Mail pulls more useful information out of the mail when it creates the calendar entry. What they showed today is, in theory, something different. (I presume the difference is integrating it into the Siri assistant, not the mail application.)
The "AI" bit (a word they didn't mention during the keynote BTW) is the processing of a natural language user command to something the existing ML model can understand.

Most of the things shown during the keynote can already be done with older iPhones - on device, but they need to be "talked to" like a computer, not with natural language that's not completely perfect.

> Most of the things shown during the keynote can already be done with older iPhones - on device, but they need to be "talked to" like a computer, not with natural language that's not completely perfect.

That's only half true. If you get a text saying "Yo let's meet tomorrow at lunch?" it will offer an option to create an event from it, so even now it's possible in non-perfect scenarios.

Now the real question is: does getting the next 5% that wasn't possible justify sending potentially all your data to Apple's servers? I think the answer is a pretty resounding "fuck no".

Overall the announcement is an extremely low value proposition (does anyone really use their stupid Bitmoji thing?) but asks for a LOT from the user (a vague "hey, some stuff will be sent to our servers").

It is really “the app” that has to die in order for AI to show its potential in user interfaces. If you want to, say, order from a restaurant, your personal agent should order it for you, any attempt for the restaurant to “own” the consumer by putting an app in his face has to end.
I don’t think I am understanding what you mean, but isn’t one of the potential use cases of AI to say: “Siri, order me the thing I always get from _restaurant_,” and it navigates the app for you in the background? Potentially this can be done without API integration; the AI synthetically operates the app. Maybe it “watches” how you used the app before (which options you choose, which you dismiss, etc.) to learn your preferences and execute the order with minimal interaction with the user. This way annoying, bad UI can be avoided. AI “solves” UI in this way?

Are you saying this type of scenario kills the app, or are you saying the app needs to die, replaced by an API that AIs can interact with, thus homogenizing the user experience, and avoiding the bad parts of Apps?

Preferably the latter, but if the agent can use a crappy app for you (maybe using accessibility APIs in some cases), that’s better than the bad experience that “download our app” usually is.

Better yet, the system should know about all the commercial options available to you and be a partner in getting food you like, taking advantage of discounts, all of that.
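A toy sketch of the preference part (entirely hypothetical, just to make the idea concrete): an agent could start from something as dumb as order history and work up from there.

```python
from collections import Counter

def pick_order(history, menu):
    """Hypothetical agent step: pick the user's most-frequent past item
    that is still on the menu (a crude stand-in for learned preferences)."""
    counts = Counter(item for item in history if item in menu)
    if not counts:
        return None  # nothing familiar; the agent would have to ask
    return counts.most_common(1)[0][0]

history = ["pad thai", "green curry", "pad thai", "pad thai", "spring rolls"]
menu = ["green curry", "pad thai", "fried rice"]
assert pick_order(history, menu) == "pad thai"
assert pick_order(history, ["fried rice"]) is None
```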

An interesting consequence: I started to think about how I'll be incentivized to take more pictures of useful information, and I might even try setting up a Proton Mail proxy so I can use the iOS Mail app and give Siri more context.
krrrh · 2 weeks ago
I’m curious if simply running the Proton Mail bridge on a Mac at home would allow the native mail app to feed “semantic” context across devices to iOS.
The native mail app would have my full inbox, so it could (presumably) do whatever local analysis it would do with a normal email account
Google is doing this as well, but they are doing it within a single app like Gmail (assuming all the info is there) and across websites with agents, not across apps the way Apple is doing it with Mail, Messages, Maps, etc.
100%. Based on what I've seen so far, unified context is king.

Which at the backend means unifying necessary data from different product silos, into organized and usable sources.

Not to mention it's tied into the underlying SDK APIs that basically the whole system is based on, and it seems they are using those same APIs for the internal integrations, so they can feel what's missing themselves as well.
I’ll be thoroughly impressed when Siri learns my wife’s name for good. Yes, I trained it, but somehow the lesson was forgotten.
You can set a relationship type in her contact card. I think Siri uses that data.
Ah, you mean my good friend 'heart emoji wife heart emoji'
We will see; in practice Siri has been pretty much useless even when hyped in demos. I keep my expectations pretty low.
The willingness you seem to have to sacrifice all your privacy for a few gimmicks is astounding
"brother didn’t bother to check the flight code I sent him via message when he asks me when I’m landing for pickup"

Yeah but what about people going to the wrong airport, or getting scammed by taking fake information uncritically? "Well it worked for me and anyway AI will get better.". Amen.

Even more so, why does the brother take the time to bring up Siri if he can't read the flight code? It's the same thing, correct?
You do know Siri works while driving and at other times when you don't want to go fumbling around?
I will believe it when Siri isn't the same stupid decade-old idea. I'm sorry if I sound snarky, but they have had Star Trek abilities this whole time, nerfed for "safety" and platform product integrity --from my iPhone
baby · 2 weeks ago
I just wanted a folding iPhone
The AI/cartoony person being sent as a birthday wish was super cringey, like something my boomer father would send me. I'm a fan of Genmoji; that looks fun. Less a fan of generated clip art and "images for the sake of having an image here," and way, way less into this "here, I made a cornball image of you from other images of you that I have" feature. It's as lame as Animoji but as creepy as deepfakes.
Aimed at a different demographic. Peepaw and Meemaw are absolutely going to love it.
2 weeks ago
Yeah, the genmoji feel like a proper Apple feature, but the full images feel cheap and pointless.
LOL, you haven't been in group chats with idiot drunk friends, apparently. Shit like that kills. I had a friend who hates iPhones; I sent a dozen Bing AI images of him as a cartoon doing... things... to the phone... the entire chat was dying for days.
> AKA there would be a way to choose backends.

I think the percentage of iPhone users for whom this would matter is very small. It's similar to how many people care about using a different browser than Safari on iOS (or Chrome on Android): in the US at least, those two browsers have ~95% market share.

> I think the percentage of iPhone users for whom this would matter is very small.

Oh yeah? Then why don't they permit you to choose an App Store, a browser, a messenger, a blah blah blah...

I don’t follow. If the percentage of people who care about non-Apple software is low, then it totally makes sense that Apple wouldn’t care about opening their platform to 3rd-party software. People didn’t jump ship to Android, after all. Apple allows alternate app stores in Europe because of regulation, not users revolting.
You can set the default browser.

And there are plenty of messaging apps on iOS.

The App Store, sure, they don't allow sideloading, but that's a different matter (and the number of users, not devs, who care about that is even smaller).

You can only change your browser in the EU, since three months ago, because of their consumer protection laws.

In all other countries, the "browsers" on the App Store are only skins on top of a crippled version of Safari.

I'm in the US. I can set the default browser on my iPhone to Chrome, right in the Safari settings.
> I can set the default browser on my iPhone to Chrome, right in the Safari settings.

You can set your browser to a Chrome UI wrapper around a Safari Webview. You can do this with any [browser] UI wrapper around a Safari Webview, as long as [browser] has received the relevant entitlement from Apple.[0]

Outside of the EU, all browser apps on iOS must run Safari's engine.[1]

[0]: https://developer.apple.com/documentation/Xcode/preparing-yo...

> Request the default browser entitlement by filling out the Default browser entitlement request form. If your request is accepted you get both the default browser entitlement, and the com.apple.developer.browser.app-installation entitlement. If you have the default browser entitlement, fill out this form to receive the app-installation entitlement for your browser app.

[1]: https://developer.apple.com/documentation/browserenginekit

> Important: To distribute an app that uses an alternative browser engine, you need to request the relevant entitlements for your developer account. For more information and to request the entitlements, see Using alternative browser engines in the European Union.

I don't comprehend why people feel like being a Safari wrapper is sufficient.

How do people imagine ad blockers get implemented? Why do they assume ad blockers will be supported by Apple, which once ran an ad network and runs an ad network in Apple News, forever?

If publishers wanted to support only ad-block-blocking browsers, that's their prerogative too! I don't think Apple should get to decide that ads are protected if you appear in Apple News but not protected if you appear in Mobile Safari, either.

People opposing choice: it never ceases to surprise me.

I understand the X-wrapper-on-Safari-engine point, and I don't believe most people care about the underlying engine. Just like with Microsoft Edge being a Chromium browser, most people don't seem to care.
People care about the effects. For example, you can install adblockers on every version of Firefox except for the one on iOS.
I agree with you on this. People that care about adblockers (or something else) care about whether they can do that or not. A minority of people care about the non-default Safari web experience.
Those aren’t anywhere near the same from a technical point of view.
Seems like a silly gripe - why not buy android and have it all?
But...I can't have Safari on Android!
You can set any browser you want, as long as it’s a skin on top of Safari.
The browser thing is meant for security and privacy; why do you think they allow alternatives to all their other default apps except for browsers?

But more than that, why are your arbitrary expectations any more important than their arbitrary requirements?

Because it's my device.
To me, everything about how it's been presented so far says the point of how it's set up is that they don't want to use backends. They want everything to happen on device. Even having ChatGPT for expanded queries is an unfortunate necessity driven by the hardware not being powerful enough yet.

How much is run entirely on device so far is unclear, but the sessions later in the day should expand on that.

On device or in an Apple-owned DC. It sounds like they have aspirations for their own Apple-owned LLM. ChatGPT seems like it's there until they can get something good enough to generally replace it for cases where their in-house solution isn't capable enough yet. They'll likely continue to invest heavily in big, capable LLMs as well as ones small enough to run on device (while working on the hardware side to ensure they have the device capabilities to run more powerful models on the device).
The benefit of owning the last mile to the customer is that you can choose when you want to replace default Maps, or not.
So, the company that brought us Siri is going to build something better than ChatGPT... something that will run on-device no less. It's just not quite ready yet. Got it.
Siri was quite impressive when it came out. I just felt it never got significantly better until it became an embarrassment
seec · 1 week ago
It never was impressive. It only made for cool demos, and Apple aficionados have worked the reality distortion field like crazy ever since. It's actually embarrassing how bad it has always been compared to the Google equivalents announced later with less fanfare.

I don't even care much because I don't think "assistants" are good for much of anything, but if I have to use one, Siri is not the one I would like.

kaba0 · 2 weeks ago
Yeah, by having literally a shitton of cash, and having bought multiple ML startups over the years. Plus it’s not like they couldn’t make Siri better; multiple internal projects had problems with Siri and were trying to get it replaced, but none went anywhere, possibly because higher-ups were planning ahead with this.
I don’t follow your second-to-last paragraph. It’s called Apple Intelligence. If you want to use something else, do so, but don’t expect Apple to build its own product and let you use whatever you want with it. Clearly the goal for Apple is to eventually use its own models and be an entirely in-house product.
kdot · 2 weeks ago
I see the goal as a setup to pit LLM providers against each other to pay 10B a year to be the 2nd tap default.
Oh yeah, that is a possibility. I think, though, that long term they are going to have Siri be a search engine that actually works, the way Google once did.
> Clearly the goal for Apple is to eventually use its own models and be an entirely in house product.

If that turns out anything like Siri, then surely you would understand why people want a bring-your-own-model framework.

If it's bad enough that you need to bring your own model, (a) Apple is toast, because this is, to some extent, the future; (b) you're hosed, because you can't optimize for battery life as well as Apple can.
Their desires may or may not be rational in this but clearly Apple isn’t going to allow it. They have a history of doing their own thing.
Their desires are expressly illegal, in many countries. We're past the whole antitrust stuff now, we are in a post-FAANG society and Apple doesn't just get away with that stuff for free anymore. I'm not crazy for expecting competition.

This is also coincidentally the reason I got rid of my Apple products. The experience fucking sucks! The only way things get better is if you buy more hardware, which probably makes the average mall-addicted American smile but makes me want to vomit. Everything is upsell, and not just upsell where you can improve the experience. No, you have to buy Apple's solution because every third-party is wrong and can't be trusted. They will gimp anyone that does not compensate them handsomely and rob the only people brave enough to offer their users an improved experience. The devil takes notes during Tim Cook's business meetings. They deserve everything coming to them, and they know it too.

So try it. Watch them go along, "doing their own thing", and then watch them come back limping and bruised after the FTC gives them a worse beating than Microsoft got in the 90s. We know why Apple is mad about this, it doesn't matter. They can go quietly or we can make this a long, protracted process. Microsoft got away lucky when you think about it, still all in one piece.

seec · 1 week ago
The funniest thing about current Apple is that they don't just upsell you on more hardware (that would be relatively fine); they also hardcore upsell you on their "services," which are too often lackluster and fucked up in some bizarre way, to the point where using their devices without the service upsell is a much less desirable experience.

It makes the whole point of paying more for better hardware completely moot, because if you're going to need to rely on cloud "service" stuff, you might as well get cheaper hardware from the other companies, who do that much better than Apple ever did.

I like their stuff (well, the hardware at least), but I also don't want to buy more stuff from them; it's just not sustainable, and they already make too much money anyway.

kaba0 · 2 weeks ago
Can you replace the camera firmware in any of your devices? That’s the same category; I don’t think laws tell you anything about that.
Sure. I’m agnostic in all this but I do think Apple prefers to build their own AI and worry about anti-trust later.
How can you "worry about anti-trust later" when you are under active anticompetitive inquiry from three of your largest markets?
sib · 2 weeks ago
Because they want to build a better product for users rather than one that's been gimped by regulators?
In the same way a person or entity does anything. Just do it.
I don't comprehend how people are not more angry about how utterly shitty the Apple-coerced apps are.
What is an example of utterly shitty app from Apple and which alternative is miles better?
Podcasts. Any other podcast app.
What do you mean into hands of platform owners? The point of having an Apple device is that you can run stuff on your device. The user is in control, not any platforms.
I think what they're getting at is that the platform owners have power because they can actually leverage the data that users give them to be useful tools to those users.

I would contrast this with the trend over the last year of just adding a chatbot to every app, or Recall being just a spicy History function. It's AI without doing anything useful.

I take it as: 3rd-party alternatives will have a much harder time because they have to ask the user to share their data with them. Apple/Google already have that established relationship, and 3rd parties are unlikely to have the level of integration and simplicity that the platform owners can deliver.
Apple owns the platform. The user owns the device that embodies the platform.
gowld · 2 weeks ago
Apple owns the software platform. Can I run my non-Apple Intelligence software on the data in "my" iPhone?
Of course. There will be plenty of APIs that 3rd parties can use to access the same data Apple Intelligence has access to.
They did mention they’re adding support for other providers.
croes · 2 weeks ago
>Private Cloud Compute

But it runs in their cloud.

Aside from the search and Siri improvements, I'm really not sure about the usefulness of all the generative stuff Apple is suggesting we might use here.

If you spend an hour drawing a picture for someone for their birthday and send it to them, a great deal of the value to them is not in the quality of the picture but in the fact that you went to the effort, and that it's something unique only you could produce for them by giving your time. The work is more satisfying to the creator as well - if you've ever used something you built yourself that you're proud of vs. something you bought you must have felt this. The AI image that Tania generated in a few seconds might be fun the first time, but quickly becomes just spam filling most of a page of conversation, adding nothing.

If you make up a bedtime story for your child, starring them, with the things they're interested in, a great deal of the value to them is not in the quality of the story but... same thing as above. I don't think Apple's idea of reading an AI story off your phone instead is going to have the same impact.

In a world where you can have anything the value of everything is nothing.

I've got a fairly sophisticated and detailed story world I've been building up with my kid, it always starts the same way and there are known characters.

We've been building this up for some time, this tiny universe is the most common thing for me to respond to "will you tell me a story?" (something that is requested sometimes several times a day) since it is so deeply ingrained in both our heads.

Yesterday, while driving to pick up burritos, I dictated a broad set of detailed points, including the complete introductory sequence to the story to gpt-4o and asked it to tell a new adventure based on all of the context.

It did an amazing job at it. I was able to see my kid's reaction in the reflection of the mirrors and it did not take away from what we already had. It actually gave me some new ideas on where I can take it when I'm doing it myself.

If people lean on gen AI with none of their own personal, creative contributions, they're not going to get interesting results.

But I know you can go to the effort to create and create and create and then on top of that layer on gen AI--it can knock it out of the park.

In this way, I see gen AI capabilities as simply another tool that can be used best with practice, like a synthesizer after previously only having a piano or organ.
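For the curious, the mechanics of the dictation step are mundane: accumulate the story-world notes as persistent context and attach the night's request. A hypothetical sketch in the role/content message shape that chat APIs like OpenAI's use (story details invented for illustration):

```python
def build_story_request(world_notes, intro, ask):
    """Assemble persistent story-world context plus a one-off request,
    in the role/content message format chat APIs expect (sketch only)."""
    system = ("You are continuing a long-running bedtime story universe.\n"
              + "\n".join(f"- {note}" for note in world_notes))
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"{intro}\n\nNew adventure request: {ask}"},
    ]

messages = build_story_request(
    world_notes=["The hero is a small fox named Pip.",
                 "Every story starts at the old lighthouse."],
    intro="Once upon a time, at the old lighthouse...",
    ask="Tonight: Pip finds a mysterious map.",
)
# To actually run it (network access and an API key required):
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
assert messages[0]["role"] == "system"
assert "Pip" in messages[0]["content"]
```

The interesting part isn't the API call; it's that the accumulated context is what makes the output feel like *your* story world.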

That's a very valid rebuttal to my comment. I think this kind of "force multiplier" use for AI is the most effective one we have right now; I've noticed the same thing with GPT-4 for programming. I know the code well enough to double check the output, but AI can still save time in writing it, or sometimes come up with a strategy that I may not have.

Maybe the fact that you did the dictation together with your child present is also notable. Even though you used the AI, you were still doing an activity together and they see you doing it for them.

In fact, by allowing people to generate photos for birthday wishes, Apple is elevating the bottom line, not lowering the top line. The person who wants to put in the effort and send a hand-drawn image would often not want to resort to a ready-made machine creation. OTOH, the simple "HBD Mom" sender would now send "Happy Birthday Mom <genmoji>" and an image...
s3p · 1 week ago
Oh god... if someone sent me AI-generated slop for my birthday I would be bothered. A simple happy birthday is fine!
What about things like GIPHY reactions? I’m guessing you’re not a fan of those either, or of using quotes from well-known people. These shortcuts have existed as long as people have been writing or drawing. They just get easier and more powerful over time.

I view this as just extending that to custom reactions and making them more flexible, expanding their range of uses.

The best articulation of what the industry is currently calling “AI” is “augmented intelligence”—this wording captures that these are tools that can enhance intelligence, not replace it or come up with its own ideas.
Meta comment: This back-and-forth between Nition and bredren is one of the best exchanges I’ve read on HN recently. Thanks to both of you.
Do you think as much creativity and effort would have gone into the story if you had access to AI from the start?
Given current interfaces, yes. AR via smartphone or otherwise remain invasive to interpersonal communication in 2024.
You could say the same thing for sending a Happy Birthday text, versus a hand written letter or card. Nothing is stopping a person from sending the latter today, and yes they are more appreciated, but people also appreciate the text. For example, if you're remote and perhaps don't have that deep of a relationship with them
If a friend of mine sent me some AI generated slop for my birthday I'd be more offended than if they just sent me a text that only contains the letters "hb"
Birthday cards are slop too
The messages inside of them, which are presumably not AI-generated, aren't however.
I guess the question is, is sending an AI Happy Birthday image better than sending a Happy Birthday text?
Nope, they're identical, but the AI one at least looks cool lol
Your analogy does not apply at all.
The value of a gift isn't solely in how much you worked on it or what you spent on it. It can also be in picking out the right one, if you picked something good.

Context will be more important when the gift itself is easy.

I would argue the same thing applies when you buy a card from Hallmark
I sometimes think the physical world has been going through a similar time, where most of what we own and receive is ephemeral, mass-produced, lacking in real significance. We have a lot more now but it often means a lot less.
Having been bombarded with forwards of "good morning" image greetings from loved ones on a daily basis, I can definitely attest to this sentiment.

AI spam, especially the custom emoji/stickers, will be interesting in terms of whether they will have any reusability or will be littered like single-use plastic.

· 2 weeks ago
LOL, that image you painstakingly created is also forgotten not long after being given, for most people. Just because you know the effort that went in doesn't mean the receiving person does, 99.9% of the time.

Same thing for your kid: the kid likes both stories and gives zero shits whether you used GenAI or sat up for 8 hours trying to figure out the rhyme. Those things are making YOU feel better, not the person receiving it.

I think it would be clear that the picture was drawn for the person - I imagine most people would explicitly say something like "I drew this for you" in the accompanying message. And I don't know what kind of kids you've been hanging around, but my daughter would definitely appreciate a story that I spent some time thinking up rather than "here's something ChatGPT came up with". I guess that assumes you're not going to lie to kids about the AI-generated story being yours, but that's another issue entirely.
You go into "HOW" you wrote a poem for your daughter? Are you also explaining and rubbing in how hard you worked to get her food on the table? Like wow, the number of people here who want their "effort" calculated into the "love" cost of something is insane.

I was brought up believing that the thought matters. If I think to call my mom, she appreciates it; I don't need to make some excess effort to show her I love her or show her more love.

You read your daughter a book off your phone that you got for free. Is that somehow worth less than a book you went to Barnes & Noble and paid full price for?

With my original bedtime story example, I was actually thinking about the kind of story you make up on the spot. Like the topic request comes at bedtime, and maybe the child even has feedback on how the story should go as you're making it up. The alternative of the parent quickly asking ChatGPT on their phone for a story on the selected topic just seems not as fun and meaningful.

I guess in Apple's example it looks like they're writing it as a document on MacOS, so I suppose they are writing it ahead of time.

This is because there actually is a calculation that people do between "effort" and "love" (it's not some 1:1 ratio and you can't compute it exactly, but it's real). At least for the vast, vast majority of people with functional interpersonal skills...

It's the difference between calling your mom and just saying "Hi mom, this is me thinking to call you. bye." vs calling her with a prepared thing to say/ask about that you had to take extra time to think about before calling. Effort went into that. You don't need to tell her "HOW" you came up with what you wanted to talk about, but there is a difference in how your call will be received as a result.

If you really believe that sending a text versus a hand written card will have no difference on how the message is interpreted, you should just know that you are in the minority.

tines · 2 weeks ago
> those things are making YOU feel better not the person receiving it

I don't think this is true at all. Love is proportional to cost; if it costs me nothing, then the love it represents is nothing.

When we receive something from someone, we estimate what it cost them based on what we know of them. Until recently, if someone wrote a poem just for us, our estimation of that would often be pretty high because we know approximately what it costs to write a poem.

In modern times, that cost calculation is thrown off, because we don't know whether they wrote it themselves (high cost) or generated it (low/no cost).

Love is proportional to cost?!?! Holy shit, that's fucking weird. It costs me 0 to love my mom; I love my mom lol, and that doesn't change that fact. A broke mother who can't afford to take care of her kid doesn't not love the kid, or vice versa.

If you're calculating "cost" to decide whether someone is showing love, that's nuts; I feel sad for you lol. If my wife buys or makes me something, or just says "I love you," they are equivalent. I don't give a shit if she "does something for me that costs her something"; she loves me, she thought of me.

The thought is what matters. If you put extra worth on people struggling to do something meaning more love... that's... ya

tines · 2 weeks ago
I think I see what tripped you up in my comment. I said

> if it costs me nothing, then the love it represents is nothing.

You could read this as meaning that every action has to be super costly or else the love isn't there. I admit that it's poorly phrased and it's not what I meant.

What I should have said is that if it costs you nothing, then it doesn't in itself indicate love. It costs me nothing to say "I love you" on its own, and you wouldn't believe me if I just walked up to you in the street and said that. But your mom has spent thousands of hours and liters of blood, sweat and tears caring for you, so when she says "I love you," you have all that to back those words up. It's the cost she paid before that imbues the words with value: high cost, deep love.

Hopefully that makes more sense.

He uses "cost" in the context of time and effort, not directly financial cost.
I can't help but be baited into responding to this comment too lol

You are obviously willfully misinterpreting what the OP meant by "cost".

You say "the thought is what matters" - this is 100% true, and "the thought" has a "cost". It "costs" an hour of sitting down and thinking of what to write to express your feelings to someone. That's what he is saying is "proportional" to love.

It "costs" you mental space to love your mom, and that can definitely happen with $0 entering the equation.

And with respect to "extra worth on people struggling to do something meaning more love" - if you spend the time to sit down and write a poem, when that's something that you don't excel at, someone will think: "oh wow you clearly really love me if you spent the time to write a poem, I know this because I know it's not easy for you and so you must care a lot to have done so anyway". If you can't see that... thats... ya

> You say "the thought is what matters" - this is 100% true, and "the thought" has a "cost". It "costs" an hour of sitting down and thinking of what to write to express your feelings to someone. That's what he is saying is "proportional" to love.

So if you sit down and think about what to write to express your love to your mom for two hours, then you love your mom twice as much as the person who only sits down for one hour?

It's what "proportional" means. Words have meanings.

I never said that spending two hours means you love someone twice as much as spending one hour, I'm not sure where you're getting that from.

You also may be shocked to learn this, but "proportional" doesn't mean 1:1. It can mean 1:2, 5:1, or x:x^e(2*pi). All of those are proportions. Words do have meanings, and you'll note that - while I didn't even misuse the word proportional - the quotations also indicate I'm using it more loosely than its textbook definition. You know, like how a normal person might.

I'm getting the vibe from you and the other commentator that, to you, this is about comparing how much two people love their respective mothers. That's not at all what this is even about? You can't compare "how much" two people love a respective person/thing because love isn't quantifiable.

I'm really not sure what you're even taking issue with? The idea that more time and effort from the giver corresponds to the receiver feeling more appreciation? That is not exactly a hot take lmfao

Consider this scenario: Your friend sends you an image of some art they produced. It looks very impressive. You ask them how long it took them to create. They say oh, only a minute or so, I made it with a MidJourney prompt.

Do you feel disappointed in that answer? If yes, then surely you see that appreciation of something can be relative to effort.

· 1 week ago
Love is somewhat related to cost, but "proportional" is definitely not the word you want.

If love is proportional to cost, then rapists and psychos who kill their SOs are the true lovers, since the cost is 20 years in jail to a life sentence. Do you want to live by this standard?

tines · 2 weeks ago
Actually you prove my point; the psycho loves himself so much that he will risk prison to get what he wants (or keep others from having it), but he doesn't love his SO enough to pay the cost of letting her go.
I don't truly agree with your take here, but let's assume you are correct and creating real things in your life only benefits you and no one else. If you create a painting or story or piece of furniture, others prefer the more professional AI or mass-produced version.

In that scenario there will certainly be times when using the AI option makes more sense, since you usually don't have hours to spare, and you also want to make the stories that your kid likes the most, which in this scenario are the AI ones.

But even then there's still that benefit to yourself from spending time on creating things, and I'd encourage anyone to have a hobby where they get to make something just because they feel like it. Even if it's just for you. It's nice to have an outlet to express yourself.

What a cynical take!
Their demos looked like how I imagined AI before ChatGPT ever existed. It was a personalized, context aware, deeply integrated way of interacting with your whole system.

I really enjoyed the explanation for how they planned on tackling server-enabled AI tasks while making the best possible effort to keep your requests private. Auditable server software that runs on Apple hardware is probably as good as you can get for tasks like that. Even better would be making it OSS.

There was one demo where you could talk to Siri about your mom and it would understand the context because of stuff that she (your mom) had written in one of her emails to you... that's the kind of stuff that I think we all imagined an AI world would look like. I'm really impressed with the vision they described and I think they honestly jumped to the lead of the pack in an important way that hasn't been well considered up until this point.

It's not just the raw AI capabilities from the models themselves, which I think many of us already get the feeling are going to be commoditized at some point in the future, but rather the hardware and system-wide integrations that make use of those models that matters starting today. Obviously how the experience will be when it's available to the public is a different story, but the vision alone was impressive to me. Basically, Apple again understands the UX.

I wish Apple the best of luck and I'm excited to see how their competitors plan on responding. The announcement today I think was actually subtle compared to what the implications are going to be. It's exciting to think that it may make computing easier for older people.

Until this gets into reviewers' hands, I think it's fair to say that we really have no idea how good any of this is. When it comes to AI being able to do "all kinds of things," it's easy to demo some really cool stuff, but if it falls on its face all the time in the real world, you end up with the current Siri.

Remember this ad? https://www.youtube.com/watch?v=sw1iwC7Zh24 12 years ago, they promised a bunch of things that I still wouldn't trust Siri to pull off.

fckgw · 2 weeks ago
These are all very basic commands that Siri pulls off flawlessly whenever I use it.
seec · 1 week ago
Most of the stuff shown is much faster to do yourself if you have your hands free, and if you don't, you have to pray to the gods that Siri doesn't fuck up for whatever reason.

Even something as simple as setting a timer, Siri will bork at least 1 time in 10. I know that for sure, since I worked at a friend's restaurant 2 summers ago and was heavily using Siri's timer to time french fry blanching (many batches, for at least 2 hours every day or every 2 days); this damn thing would regularly use the wrong time or not understand at all, even though it was always the same damn time and the conditions were always similar.

On the other hand, the Google Home at my cousin's place operates at my command without mistakes, even though it doesn't even have the luxury of knowing my voice.

People who think Siri is good are either delusional or have special godlike skills. But considering how many hilarious "demos" I have gotten from Apple-fan friends, I will say it's the former.

I myself have used iPhone/Apple Watch/Macs since forever, so it's not like I'm hating for free. It just goddamn sucks, like too much Apple stuff recently...

fckgw · 1 week ago
> People who think Siri is good are either delusional or have special godlike skills. But considering how many hilarious "demos" I have gotten from Apple-fan friends, I will say it's the former.

Or maybe they just have good experiences? Why do they have to be delusional?

seec · 1 week ago
Because they usually are in a tech bubble where Apple is the best there is at everything, and they never really tried any alternatives.

So, they think it's good, and it's a delusion because it would not objectively be considered good if it were compared side by side with the competition.

I know that for sure because I have spent a lot of time with people like that, and I used to be a bit like that myself. It's much easier for most to see the world in black and white, just like religions do with good/bad, and people who really like Apple stuff are very often like that.

wilg · 2 weeks ago
I think too many people assumed that because ChatGPT is a conversational interface, that's how AI should be designed, which is like assuming computers would always be command lines instead of GUIs. Apple has done a good job of providing purpose-built GUIs for AI stuff here, and I think it will be interesting to watch that stuff get deeper.
> There was one demo where you could talk to Siri about your mom and it would understand the context because of stuff that she (your mom) had written in one of her emails to you... that's the kind of stuff that I think we all imagined an AI world would look like.

I can't help but find all of this super creepy.

TillE · 2 weeks ago
We're really just describing an on-device search tool with a much better interface. It's only creepy if you treat it like a person, which Apple is pretty careful not to do too much.
Yep, it's an assistant; they didn't add some weird app where you can talk to virtual granny lol
Yep.

I remember vividly the comment on Windows Recall that said if the same was done by Apple it would be applauded. Here we are.

At the risk of sounding like an Apple apologist, Apple has a pretty good (though not perfect) track record for privacy and security.

Microsoft on the other hand… well, I understand they just pulled the recall feature after it was discovered the data wasn’t even encrypted at rest?!

If anything, Recall is MORE privacy-respectful than this, since everything is stored and processed on your device and you can access (and easily alter) the database, exclude specific applications, websites (for Edge, for now), etc.

I'm not saying it's not an awful feature, I will disable it as soon as it is installed.

The fact that it's not encrypted at rest really is the least of my concerns (though it does show the lack of care and planning). For this to be a problem, an attacker already has all the necessary accesses to your computer to either get your encryption key or do devastating damage anyway.

> At the risk of sounding like an Apple apologist, Apple has a pretty good (though not perfect) track record for privacy and security.

"Not perfect" is enough to be concerned. I would also not be surprised that their good reputation is more due to their better ability at hiding their data collection and related scandals rather than due to any care for the user.

I thought that the problem with Recall is that it takes screenshots (potentially of sensitive things like passwords, or private browsing sessions) and stores new data that you never intended to store in the first place.

This Apple AI is not storing anything new, it’s just processing the data that you already stored. As long as they pay close attention to invalidation on the index when things get deleted.

The cloud processing is a little concerning but presumably you will be able to turn it off, and it doesn’t seem much different to using iCloud anyway.

seec · 1 week ago
The screenshots are not storing anything new, it's just a visual trail of an already existing activity. It literally just makes it easier to browse the history, that's it. Someone motivated could just recompose activity from logs/histories of the various softwares.

The distinction is made by people who seem hell bent on trashing Microsoft for everything and glorifying everything Apple does.

I strongly disagree. My expectation of what's on my screen is that it's ephemeral unless I take a screenshot.

Here's an example. I always use a random password when creating accounts for (eg) databases, but not every UI supports this, so I have a little shell script that generates one. I then copy and paste it from the terminal. Once I close the terminal window and copy something else, that password exists in only one place.

With Recall, it's now stored permanently. Someone who gets access to my screen history is a step closer to getting into my stuff.

Of course there are workarounds. But the expectation I have around how my screen works informs the actions I take on it.

Here’s another example. I recently clicked on a link from HN and ended up on a page containing explicit images. At work. As soon as I realised what I was looking at, I clicked away.

How long until my visual history can be interrogated, characterised, and used to find me guilty of looking at inappropriate material in the workplace? Such a system is not going to care about my intentions. Even if I'm not disciplined, I'd certainly be embarrassed.

I don't think the above poster was really referring to who does it, but to the fact that it's creepy to be having a conversation about your mom with your phone to begin with.
As opposed to what: If you hired an actual human assistant, it wouldn't be?
Having other people read through my stuff and respond for me is creepy regardless.
This is what executive assistants do all day.

Some people view house keepers the same way. “I can’t let someone going through and touch all of my personal belongings. That’s just creepy.”

There’s a wide range of what people find creepy and also what people can and do get used to.

Assistants are generally limited to people who can afford one; I think that's a fair assumption. And not everyone in that group is going to have one, which leaves very few people who do.

Why would this translate to everybody wanting to have one?

What does that have to do with creepiness?
How many people out there are hiring personal assistants?
Something else this does is push people even more heavily into the ecosystem. If it works how they showed, you'll really want it to understand your life, so you'll want all your devices helping build that net of data, providing your context to every device for answering about events and stuff. Meaning hey, maybe I should get an Apple TV instead of a Chromecast so that Siri knows about my shows too.
I'm just unhappy that this will mostly end up making the moat larger and the platform lock-in more painful either way. iPhones have been going up in price; once you're deep in this, serious compute will be simply extortion, as leaving the Apple universe is going to be nigh impossible.

Also, no competitor is going to be as good at integrating everything, as none of them have systems as integrated.

· 2 weeks ago
I'd be skeptical of the marketing used for the security/privacy angle. I won't be surprised if there is subpoena-able data out of this in some court case.

I might have missed it, but there has not been much talk about guardrails or ethical use with their tools, and what they are doing about potential abuse.

You can read the details of their approach for the privacy/security aspects of the cloud compute portion here. https://security.apple.com/blog/private-cloud-compute/
The question I have is how deeply it is integrated with non-Apple apps. Like Signal (still no Siri support) or Outlook.
It sounds like app creators need to build in support using SiriKit and App Intents. If they're using either already, a fair bit of integration will be automatic.
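For context, App Intents is Apple's framework for declaring actions the system (Shortcuts, Siri) can invoke on an app's behalf. A minimal sketch of what adoption looks like; the intent name and behavior here are hypothetical examples, not from any real app:

```swift
import AppIntents

// Hypothetical intent: lets Siri/Shortcuts trigger an in-app action.
// Builds only against Apple's SDKs (iOS 16+ / macOS 13+).
struct ShowLatestMessageIntent: AppIntent {
    // Title shown in Shortcuts and used for phrase matching.
    static var title: LocalizedStringResource = "Show Latest Message"

    // Bring the app to the foreground when the intent runs.
    static var openAppWhenRun: Bool = true

    @MainActor
    func perform() async throws -> some IntentResult {
        // App-specific navigation/lookup would go here.
        return .result()
    }
}
```

Presumably apps already exposing their functionality this way are what gets the "automatic" integration; apps that haven't adopted App Intents get nothing for free.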
Hope I can keep Apple's fingers from getting "deeply integrated" with my personal data.
maz1b · 2 weeks ago
Gotta say, from a branding point of view, it's completely perfect. Sometimes things as "small" as the letters in a company's name can have a huge impact decades down the road. AI == AI, and that's how Apple is going to play it. That bit at the end that said "AI for the rest of us" is a great way to capture the moment, and probably suggests where Apple is going to go.

imo, Apple will gain expertise serving a monster level of scale for more casual users who want to generate creative or funny pictures and emojis, do some text work, and enhance quality of life. I don't think Apple will be at the forefront of new AI technology for integrating into user-facing features, but if they are to catch up, they will have to get to the forefront of those same technologies to support their unique scale.

It was a notable WWDC. I was curious to see what they would do with the Mac Studio and Mac Pro, but there was nothing about the M3 Ultra or M4 Ultra, or the M3/M4 Extreme.

I also predicted that they would use their own M2 Ultras and whatnot to support their own compute capacity in the cloud, and interestingly enough it was mentioned. I wonder if we'll get more details on this front.

I think the biggest announcement was the private compute cloud with Apple Silicon. Apple is building up internal expertise to go after Nvidia.
dmix · 2 weeks ago
Can you explain what that means for someone who missed part of the video today?
The Apple Intelligence cloud system uses Apple's own M-series chips, not Nvidia.
Because they will be running inference using much smaller models than GPT-4.
At least they are honest about it in the specs that they have published - there's a graph there that clearly shows their server-side model underperforming GPT-4. A refreshing change from the usual "we trained a 7B model and it's almost as good as GPT-4 in tests" hype train.

(see "Apple Foundation Model Human Evaluation" here: https://machinelearning.apple.com/research/introducing-apple...)

Yea, their models are more targeted. You can't ask Apple Intelligence/Siri about random celebrities or cocktail recipes.

But you CAN ask it to show you all pictures you took of your kids during your vacation to Cabo in 2023 and it'll find them for you.

The model "underperforms", but not in the ways that matter. This is why they partnered with OpenAI, to get the generic stuff included when people need it.

yborg · 2 weeks ago
Isn't it also that Nvidia chips are basically unobtainable right now anyway?
Yeah, but Apple wouldn’t care either way. They do things for the principle of it. “We have an ongoing beef with NVIDIA so we’ll build our own ai server farms.”
Apple has a long antagonistic relationship with NVIDIA. If anything, it is holding Apple back, because they don't want to go cap in hand to NVIDIA and say "please sir, can I have some more".

We see this play out with the ChatGPT integration: rather than Apple hosting GPT-4o themselves, OpenAI is. Apple is providing NVIDIA-powered AI models through a third party, somewhat undermining the privacy-first argument.

Rumours say that Apple has bought a lot of GPUs from Nvidia in the last year or so in order to train their own models.
s3p · 1 week ago
Not really. They use ChatGPT as a last resort for a question that isn't related to the device or an Apple-related interaction. Ex: "Make a recipe out of the foods in this image" versus "how far away is my mom from the lunch spot she told me about". And in that instance they ask the user explicitly whether they want to use ChatGPT.
I see what they did here, and it is smart, but it can bring chaos. On one side it is like saying "we own it," but on the other hand it is putting a brand outside of their control. Now I only hope people will not abbreviate it as "ApI", because it will pollute search results for API :P
Yeah I feel like we are getting the crumbs for a future hardware announcement, like M4 ultra. They’ll announce it like “we are so happy to share our latest and greatest processor, a processor so powerful, we’ve been using it in our private AI cloud. We are pleased to announce the M4 Ultra”
It was speculated when the M4 was released only for the iPad Pro that it might be out of an internal need on Apple's part for the bulk of the chips being manufactured. This latest set of announcements gives substantial weight to that theory.
Yeah that seems very reasonable/likely. The release of the training toolkit for Apple silicon too points that way: https://github.com/ml-explore/mlx-examples/tree/main/transfo...
Yeah, real smart move to make your product's initials unusable and unsearchable. Apple has done it again.
Indeed. I suppose they are hoping people will associate the two letters with their thing rather than the original acronym.
People will just call it Apple AI like ATM machine.
I remain skeptical until I see it in action. On the one hand, Apple has a good track record with privacy and keeping things on device. On the other, there was too much ambiguity around this announcement. What is the threshold for running something in the cloud? How is your personal model used across devices - does that mean it briefly moves to the cloud? How does its usage change across guest modes? Even the phrase "OpenAI won’t store requests" feels intentionally opaque.

I was personally holding out for a federated learning approach where multiple Apple devices could be used to process a request, but I guess Occam's razor prevails. I'll wait and see.

> Apple has a good track record with privacy and keeping things on device.

Apple also has a long track record of "you're holding it wrong". I don't expect an amazing AI assistant out of them, I expect something that sometimes does what the user meant.

> Apple also has a long track record of "you're holding it wrong".

And yet this was never said.

Closest was this:

> Just don't hold it that way.

Or maybe this:

> If you ever experience this on your iPhone 4, avoid gripping it in the lower left corner in a way that covers both sides of the black strip in the metal band, or simply use one of many available cases.

It's merely the instance that gave the phenomenon its name, not the only time it happened.
What phenomenon?
When Apple published a webpage about how other phones also got reduced reception when you held them in a particular way, but then basically immediately pulled it. And then a while later they offered a free bumper case to mitigate the whole issue.
None of that suggests any malice. We don't know what happened internally, other than that the antenna designer was eventually let go. That engineer could have been pushing the "every phone has the problem" narrative and brushing it off. At some point the pressure from customer feedback could have meant they were overruled and ordered to retest, or to test under the specific conditions.

The fact that Apple changed their stance from “here’s a workaround” to “here’s a free bumper” is a sign they reacted to something, and that could have been anything from the conclusion of internal testing to a PR job to keep customers happy.

If they had said there was no design flaw from the start and stuck with that the whole way then I’d understand people’s reaction, but all I see is a company that said “don’t hold it that way” as a workaround then eventually issued free bumpers, thus confirming the issue. That doesn’t suggest they were blaming the user for doing something wrong. The sentiment just wasn’t there.

Apple doesn't react to anything until there's a large enough outcry about it. Rather than immediately addressing the issue, they wait to see how many people complain, to decide whether it's worth the negative press and consumer perception or not.

Everyone makes fun of Samsung batteries exploding, but forgets antennagate, bendgate, software gimping of battery life, butterfly keyboards, touch disease, yellow screens (which I believe happened when Apple had to split supply between Samsung/LG), exploding MacBook batteries (not enough to cause a fuss though), etc.

Other companies can of course be ne'er-do-wells, but people actively defend Apple for the company's missteps.

I rarely see anyone defending Apple, but I do see people constantly applying logic to them specifically that they don’t seem to apply to other companies. Take this:

> Apple don't react to anything until there's a large enough outcry about it, rather than immediately address the issue they wait to see how many people complain to decide if it's worth the negative press and consumer perception or not.

You can’t immediately address any issue. You need time to investigate issues. You might not even start investigating until you hit some sort of threshold or you’ll be chasing your tail in every bit of customer feedback. It takes time to narrow down anything to a bad component, bad batch, software bug or whatever it is.

As for weighing whether the issue is worth addressing at all - this is literally every company. If you did a recall of every bit of hardware at the slightest whiff of an issue you’d go bankrupt very quickly. There are always thresholds.

I wish we would just criticise Apple the same way we do other companies. There is no need to invent things like "you're holding it wrong" or to intentionally misrepresent batterygate as "they slowed down phones to sell you a new one". They already do other crappy things; inventing fake ones isn't necessary.

> What is the threshold for running something in the cloud?

To be fair, this was just the keynote -- details will be revealed in the sessions.

> has a good track record with privacy

They repeated this so many times they've made it true.

Do you have proof otherwise? Compared to the competition, who openly use everything about you to build a profile.
The iPhone will let you install an app only if you tell Apple about it. It will let you get your location only if you also give that location to Apple. The only way to get true privacy is to give users control, which even Google-flavored Android builds provide more of than iOS.
After so many years, so many people still believe in this user control paradigm.

Giving users control works for the slim percentage of power users. Most users will end up obliterated by scammers and other unsavory characters.

Perhaps there is a way to give control to today's users (that includes my non-technical mother) and still secure them against the myriad of online threats. If anyone knows of a paper or publication that addresses this, I'd love to read it.

If you want privacy, that's the only way to get it. As Apple has demonstrated, giving the platform owner control means eroding your privacy with no recourse and still getting obliterated by scammers.
kaba0 · 2 weeks ago
> The iPhone will let you install an app only if you tell Apple about it

That’s not 100% true, and where it is, there is a good reason, and pretty much every other store does it (being able to revoke malware)

It's 100% true. On Android, you don't have to use a store, and you don't have to tell anybody anything if you don't use a store.
I get the sense there's still a lot of work to be done over the next few months, and we may see some feature slippage. The betas will be where we see their words in action, and I'll be staying far away from the betas, which will be a little painful. I think ambiguity works in their favor right now. It's better to underpromise and overdeliver than vice versa.

They need to provide a mechanism to view the data being uploaded by you.

Same; they said "privacy" so many times I got Facebook PTSD.

I mean, there's a difference between these companies' privacy stances, both historical and current.
> Apple has a good track record with privacy and keeping things on device.

I mean they have great PR, but in terms of privacy, they extract more information from you than Google does.

Do you have a source for this?

Google is an ad company; they have a full model built of what you like and don't like at different stages of your life.

What does Apple have that's even close?

Apple is also an ad company.

They generate between $5-10B a year on ads alone now, and more importantly, that is one of their fastest-growing revenue segments.

Add the context of declining revenue from iPhone sales. That revenue and its potential will have enormous influence on decision making.

The thesis that Apple doesn't have an ads business, so there is no use in collecting the data, has been dead for five years now.

Talking about billions is disingenuous; you should be talking about percentages of revenue. Ten billion _sounds_ like a lot but really isn't.

For Google, over 80% of their revenue comes from ads.

Apple's revenue is around 380 billion; 5-10 billion in ads lands in the "other" category if you draw a pie chart of it. They make 30 billion just selling iPads, their worst-selling product.

Apple can lose the ad category completely and they won't even notice it. If Google's ads go away because of privacy protections, the company will die.
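The percentage argument is easy to check, treating the $380B and $5-10B figures quoted above as the rough approximations they are:

```python
# Back-of-envelope: ad revenue as a share of total revenue,
# using the approximate figures from the thread.
apple_total = 380e9                          # ~$380B total Apple revenue
apple_ads_low, apple_ads_high = 5e9, 10e9    # $5-10B ads

low = apple_ads_low / apple_total * 100
high = apple_ads_high / apple_total * 100
print(f"Apple ads: {low:.1f}%-{high:.1f}% of revenue")  # roughly 1.3%-2.6%
print("Google ads: 80%+ of revenue")
```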

Talking in absolutes is not accurate either. Not all revenue is equal.

There is a reason why NVIDIA, TSLA, or other stocks with growth[1] potential get a P/E multiple that their peers do not, and that a traditional blue-chip company can only dream of. The core of Apple's revenue, the biggest chunk from iPhone sales, is stagnant at best and falling at worst. Services is their fastest-growing revenue segment and is already ~$100B of the $380B. Ads is a key component of that: five years back, ads were less than $1B. That is the important part.

Also, margins matter. Even at Apple, where there are enormous margins on hardware, gross margins for services are going to be higher; that is the simple economics of any software service, where the cost of an extra user or item is marginal. The $100B is worth a lot more than an equivalent $100B in iPhone and iPad sales, where a significant chunk goes to vendors.

Executives are compensated on stock performance, and stock valuation depends heavily on expected future growth. Apple's own attempts, and the billions invested to get into auto, healthcare, and virtual reality, are a testament to that need to find new streams of revenue.

It would be naive to assume a fast-growing business unit does not get outsized influence; any middle manager in a conglomerate would know how true this is.

A Disney theme-park executive doing even 5x the revenue of, say, the Disney+ one will not get the same influence, budgets, resources, respect, or career paths.

[1] Expected growth; it doesn't have to be real. When it does not materialize, the market will correct, as is happening to an extent with TSLA.

> Google is an ad company, they have a full model of what you like and dont like at different states of your life built.

That's not what I was saying. I was saying that Apple extracts more information than Google does, not that Apple processes it to make a persona out of you. That's not the issue here. Apple is saying that they are a "privacy first" company. To be that, you need to not be extracting data in the first place.

Yes, they make lots of noise about how they do lots of things on device. That's great and to be encouraged. But Apple is still extracting your friend list, precise location, financial records, various biometrics, and your browsing and app history. And for sure, they need some of that data to provide services.

But what's the data life cycle? Are they deleting it on time? Who has access to it? What happens when a new product wants to use it? How do they stop internal bad actors?

All I want you to do is imagine that Facebook has made iOS, and the iphone, and is now rolling out these features. They are saying the same things as Apple, do you trust them?

Do you believe what they say?

I don't want Apple to fail, I just want people to think critically about a very very large for profit company. Apple is not our friend, and we shouldn't be treating them like they are.

I think what he's getting at is that Apple does collect a lot of very similar data about its users. Apple Maps still collects data about where you've driven; the difference is that they don't turn around and sell that data like Google loves to do.

I believe (but could be wrong) they also treat that data in a way that prevents it from being accessed by anyone besides the user (see: https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...).

Can you explain what you mean with "extract more information from you than google" here?

Not saying you're wrong, I'm just curious what sources or info you're using to make that claim.

On iOS, Apple records:

- who you message and when you message them
- your locations (Find My)
- your voice (Siri)
- the location of your items (AirTags)
- what you look at (app telemetry)
- what websites you visit (Safari telemetry)
- what you buy (Apple Pay)
- who you're with (location services, again)
- your facial biometrics (Apple Photos tags people with similar faces, something Facebook got fined for)
- who emails you and who you email

With these changes, you'll need to allow Apple to process the contents of the messages that you send and receive. Their security blog makes a lot of noise about E2E security, then admits that it's not practical for things other than backups and messaging.

They then say they will strive to make user data ephemeral in the Apple private cloud.

I'm not saying that they will abuse it; I'm just saying that we should give Apple the same level of scrutiny that we give companies like Facebook.

In fact, personally I think we should use Facebook as the shitty stick against which to test everyone's data use.

What do you mean by "record"? It seems like you think Apple somehow has access to and stores all that information in their cloud, and we just have to hope/trust that they don't decide they want to poke around in it?

You should look more into their security architecture if you're curious about stuff like this: the way Secure Enclave, E2EE (including the Advanced Data Protection feature for all iCloud data), and the rest work together. The reality is that they use a huge range of privacy-enhancing approaches to minimize what data has to leave your device and how it can be used. For example, the biometrics you mention never leave the Secure Enclave in the chip on your phone, and nobody except you can access them unless they have your passcode. Things like running facial recognition on your photo library are handled locally on your device, with no information going up to the cloud. Find My is also architected in a fully E2E encrypted way.

You can browse their hundreds of pages of security and privacy documentation via the table of contents here to look up any specific service or functionality you want to know more about: https://support.apple.com/guide/security/welcome/web

By record, I mean precisely that: Apple stores this data. As the key bearer, it has significant control.

Moreover, because Apple has great PR, you don't hear about privacy breaches. Everyone seems to forget they made a super cheap and, for a long time, undetectable stalking service, despite the warnings (AirTag).

Had that been Facebook or Google, it would have been the end of the feature. They have improved the unauthorised tracking flow, but it's really quite unreliable on iOS, and still really bad on Android.

> You should look more into their security architecture if you’re curious about stuff like this.

I have, and it's a brilliant manifesto. I especially love the documentation on PCC.

But it's crammed full of implied actions that aren't the case. For example: https://support.apple.com/en-gb/108756

> If you choose to enable Advanced Data Protection, the majority of your iCloud data – including iCloud Backup, Photos, Notes and more – is protected using end-to-end encryption.

OK good, so it's not much different from normal, right?

> When you turn on Advanced Data Protection, access to your iCloud data on the web at iCloud.com is disabled

Which leads me to this:

> It seems like you think this means Apple somehow has access and stores all that information in their cloud and we just have to hope/trust that they don’t decide they want to poke around in it?

You're damn right I do. It's the same with Google and Facebook. We have no real way of verifying that trust. People trust Apple because they are great at PR. But are they actually good at privacy? We have no real way of finding out, because they also have really reactive lawyers.

And that's my point; we are basically here: https://www.reddit.com/r/comics/comments/11gxpcu/our_little_... but with Apple.

So, if I've got this correct there's:

1. On-device AI

2. AI using Apple's servers

3. AI using ChatGPT/OpenAI's services (and others in the future)

Number 1 will pass to number 2 if it thinks it requires the extra processing power, but number 3 will only be invoked with explicit user permission.

[Edit: As pointed out below, other providers will be coming eventually.]
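A toy sketch of that three-tier routing; the function name, parameters, and decision logic are all hypothetical, since Apple hasn't published the actual criteria:

```python
from enum import Enum, auto

class Tier(Enum):
    ON_DEVICE = auto()              # 1: local model
    PRIVATE_CLOUD_COMPUTE = auto()  # 2: Apple's servers
    THIRD_PARTY = auto()            # 3: ChatGPT etc., opt-in per request

def route(complexity: int, device_budget: int,
          needs_world_knowledge: bool, user_consented: bool) -> Tier:
    """Hypothetical router mirroring the levels above: tier 3 only ever
    runs with explicit user permission; tier 1 escalates to tier 2 when
    the request exceeds what the device can handle."""
    if needs_world_knowledge:
        if not user_consented:
            raise PermissionError("third-party model requires explicit consent")
        return Tier.THIRD_PARTY
    if complexity <= device_budget:
        return Tier.ON_DEVICE
    return Tier.PRIVATE_CLOUD_COMPUTE
```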

I see no real difference between 2 and 3. Once the data has left your device, it has left your device. There is no getting it back, and you no longer have any control over it.

> I see no real difference between 2 and 3.

This #2, so-called "Private Cloud Compute", is not the same as iCloud. And certainly not the same as sending queries to OpenAI.

Quoting:

“With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests. These models run on servers powered by Apple silicon, providing a foundation that allows Apple to ensure that data is never retained or exposed.“

“Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection.”

“Apple Intelligence with Private Cloud Compute sets a new standard for privacy in AI, unlocking intelligence users can trust.”

"We make the hardware and we pinky promise that we will protect your data and will open source part of it" means nothing for privacy, especially when things like warrants come into play.

How would a warrant work in this case? Using Silicon Data Protection[0], the hash of the currently running firmware (of both the AP _and_ the SEP) is locked into hardware registers in the PKA engine used by the SEP. This hash perturbs the key derivation, and the PKA engine can also attest to the running firmware hash(es) using an EC key only available to it (they call this BAA, Basic Attestation Authority).

iOS won't send any data to a PCC node that isn't running firmware that has been made public in their transparency logs, and compute nodes have no way to be debugged that exposes user data[1].
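A minimal sketch of that client-side check, with all names hypothetical; the real protocol uses hardware attestation (the BAA key described above) over a cryptographically verifiable append-only log, not a plain set lookup:

```python
import hashlib

# Hypothetical stand-in for Apple's transparency log: the set of
# firmware measurements that have been published for inspection.
published_measurements = {
    hashlib.sha256(b"pcc-firmware-v1").hexdigest(),
    hashlib.sha256(b"pcc-firmware-v2").hexdigest(),
}

def willing_to_send(attested_measurement: str) -> bool:
    """The client refuses to talk to a PCC node whose attested
    firmware hash has not been publicly logged."""
    return attested_measurement in published_measurements

assert willing_to_send(hashlib.sha256(b"pcc-firmware-v2").hexdigest())
assert not willing_to_send(hashlib.sha256(b"patched-firmware").hexdigest())
```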

And at the end of the day, this is going to give the warrant holder a handful of requests from a specific user? Why wouldn't they use that same warrant to get onto the target's device directly and get that same data plus a ton more?

[0] https://help.apple.com/pdf/security/en_US/apple-platform-sec...

[1] https://security.apple.com/blog/private-cloud-compute/

Apple controls the hardware and the private keys baked into the hardware. If one of their servers can decrypt the payload, they can intercept, duplicate, and decrypt the payload and its response. I'm sure this'll start a long fight between law enforcement and Apple after the first warrant hits and Apple claims it can't comply.

Warrants to hack devices are a lot less common and generally harder to obtain. That's why police will send Google warrants like "give us info on every device that has been in a radius of x between y and z time".

I'm sure Apple did their very best to protect their users, but I don't think their very best is good enough to warrant this kind of trust. A "secure cloud" solution will also tempt future projects to use the cloud over local processing more, as cloud processing is now readily available. Apple's local processing is a major advantage over the competition but I doubt that'll stay that way if their cloud solution remains this integrated.

Apple actually does not control the private keys baked into the hardware, see the "Root Cryptographic Keys" section of their security whitepaper: https://help.apple.com/pdf/security/en_US/apple-platform-sec...

Your example indicates a situation where law enforcement does not know which device belongs to their suspect, if they even have one. That's a very different scenario from a targeted "tell us the requests belonging to this individual".

Warrants to search a device are extremely commonplace; otherwise the likes of Grayshift and Cellebrite would not be around.

From a threat-modeling perspective, compromising PCC is high risk (Apple's not just going to comply, and the fight will be very public; see the FBI San Bernardino fight), high effort (a long, protracted court case), and low reward (I only see the requests that get shipped off to the cloud). If I were law enforcement, I'd explore every other avenue available to me before going down that particular rabbit hole, which is exactly what this design is intended to achieve.

If we can't trust independent audits of code and hardware, what can we trust?

I think it boils down to this: it doesn't matter what they promise. If you send a video capture of everything you ever do on your computer to some company on the internet, you just have to take your chances. Would you put mics and cameras in all the rooms of your home that send data to Apple (or someone else) to analyze "for your benefit", even if they say and promise they won't do anything bad with the feeds?

At least with Gmail and chat clients etc., things are somewhat compartmentalized: one of the services might screw up and do something with your emails, but your Messenger or WhatsApp chats are not affected by that, or vice versa. But when you bake it into the OS (laptop or phone), you're IMHO taking a much bigger risk, no matter what the intentions are.

There is nothing which Apple Intelligence can do that a hypothetically evil Apple couldn't have done before, given sufficiently treacherous code in their operating systems. Thus if you use an Apple device, you're already trusting Apple to not betray you. These new features don't increase the number of entities one must place their trust in.

Whereas with apps like Gmail and WhatsApp on an iPhone, you must trust Google and Meta in addition to Apple, not in place of Apple. It doesn't distribute who you trust, it multiplies it.

I still think there's a big difference between trusting existing OSes and apps, which are under scrutiny by hundreds of security researchers and thousands of security nerds all the time, and willingly sending away all your data to a party who promises they will treat it well (I know it doesn't work like this in this case, but just for the sake of argument).

In essence, what you're doing is training an assistant to learn all the details of your life and habits, and the question is whether that "assistant" is really secure forever. Taken to the extreme, the assistant becomes a sort of "backup" of yourself. But yeah, it's an individual decision, with pros and cons.

If you were already in on iCloud, that one residual distinction is moot.
ENGNR · 2 weeks ago:
Open code, inspected and used by a large number of users, hosted on hardware you physically control.
ryr11 · 2 weeks ago:
I think that's fair, but impractical for most users. I have a number of Home Assistant integrations with locally hosted AI models for smart home features, but I wouldn't expect my grandma to set up a server and a few VMs when she could just give her HomePod a prompt that works with AI and have no worries about the implementation. Do you feel like Apple's "independent" auditing is insufficient?
ENGNR · 2 weeks ago:
> Do you feel like Apple's "independent" auditing is insufficient?

Yeah, pretty much

Also, your grandma might not set up a VM, but it sounds like the off-device processing is essentially stateless, or at most has a very lightweight session. It seems like the kind of thing one person could set up for their family (with the same tamper-proof signatures, plus physical security), or that someone could provide as a privacy-focused appliance for anyone to just plug into a wall, if they wanted to.

Most open source code isn't inspected though.

There have been many cases recently of compromised code being in the wild for quite some time and then only being discovered by accident.

100% of closed code is not inspected.

I have been involved in security audits for 110% closed code, code that's secret even within the company.

Auditing helps the company writing it; the auditors are usually experts in breaking stuff in fun ways, and it's good for business - we could slap "code security audited by XXX" on the sales pitch.

> and it's good for business - we could slap "code security audited by XXX" on the sales pitch.

You're on the precipice of discovering the problem of incentives when it comes to audits.

Audits are good, but they're inferior to source-available.

By you. Three comments above references "independent audits", meaning professional cybersecurity firms.

From what I can tell, Apple doesn't actually provide the source code itself, nor the (cryptographically verified) binaries and VMs to run it. Reverse engineering will still need to take place, it seems.

I will trust independent audits of local code and local hardware. There are still plenty of opportunities for someone to send out malicious patches, but the code that ships can (and probably will) be analysed by journalists looking for a scoop and security researchers looking for a bug bounty.

I have no idea what code is running on a server I can't access. I can't exactly go SSH into siri.apple.com and match checksums. Knowing Apple's control freak attitude, I very much doubt any researcher permitted to look at their servers is going to be very independent either.

Apple is just as privacy friendly as ChatGPT or Gemini. That's not necessarily a bad thing! AI requires feeding lots of data into the cloud, that's how it works. Trying to sell their service as anything more than that is disingenuous, though.

> I have no idea what code is running on a server I can't access.

That's like... the whole point? You have some kind of hardware-based measured boot thing that can provide a cryptographic attestation that the code it's running is the same as the code that's been reviewed by an independent auditor. If the auditor confirms that the data isn't being stored, just processed and thrown away, that's almost as good as on-device compute for 99.999% of users. (On-device compute can also be backdoored, so you have to trust this even in the case that everything is local.)

The presentation was fairly detail-light so I don't know if this is actually what they're doing, but it's nice to see some effort in this direction.

E: I roughly agree with this comment (https://news.ycombinator.com/item?id=40638740) later down the thread -- what exactly the auditors are verifying is the key important bit.

I do like Apple's attempts to make this stuff better for privacy, but a pinky promise not to leak any information is still just that.

Apple has developed some of the strongest anti-tampering compute in existence to prevent people from running code Apple doesn't want on hardware Apple produces. However, that protection is pretty useless when it comes to protection from Apple. They have the means to bypass any layer of protection they've built into their hardware.

It all depends on what kind of auditing Apple will allow. If Apple allows anyone to run this stuff on any Mac, with source or at least symbols available, I'll give it the benefit of the doubt. If Apple comes up with NDAs and limited access, I won't trust them at all.

Exactly. Apple has barely any oversight or accountability for their privacy claims. It's sad to see so many people taking their word at face value.

Isn't this basically what Signal does? Legitimately asking; I thought parts of their server implementation were closed source.

I don't think so. Signal regularly stops committing code to the public repos (https://github.com/signalapp/Signal-Server) when they're working on some kind of big reveal (cryptocurrency integration and such), but the server code is out there for you to run yourself.

Signal has the added benefit that it doesn't need to read what's in the messages you send. It needs some very basic routing information and the rest can be encrypted end to end. With AI stuff, the contents need to be decrypted in the cloud, so the end-to-end protections don't apply.

I meant more regarding their server setup, but now that I think about it you are correct; it matters a lot more if the query/message/whatever isn't encrypted before hitting the cloud.
dudus · 2 weeks ago:
It takes big mental gymnastics to do the same as Google and Microsoft while claiming moral superiority.

Apple's thrown stones come back to haunt their glass house.

Eh, with modern processor features like secure enclaves it's definitely possible to build systems in which the operators CANNOT access the information. (I worked on such a system using SGX for a large car producer; even physical access to the machines/hypervisors/raw memory would not give you access. Perhaps the NSA has some keys baked in to extract a session key you may generate inside an enclave, but it would be very surprising if they burned that backdoor on fruit hanging as low as this.)

SGX has been broken by speculative execution bugs, though. It had something to do with people extracting DRM keys, if I recall correctly; not exactly a nation-state attack. Since then, SGX has been removed from modern consumer Intel processors (breaking some Blu-rays and software products on newer chips in the process).

Secure enclave stuff can be used to build a trust relationship if it's designed well, but Apple is the party hosting the service and the one burning the private keys into the chip.

Yep, it was broken a few times but fixed with microcode patches (AFAIK). It's still a part of the server processors and in wide use already. I'm not saying it's a silver bullet or otherwise infallible, but it sure beats cat /dev/mem by quite some way.
bpye · 2 weeks ago:
If you produce the hardware you necessarily have access to the signing key to say update the microcode or the firmware. Intel is in the TCB for SGX, but your cloud operator wouldn’t be. In this case Apple is both the hardware manufacturer and the operator.

Yes, that's all well and good, but it assumes no mistakes, no National Security Letters ordering them to describe it that way, and no changes of control or business strategy at some point in the future.

Once the data is out of your possession it's out of your control.

There are VERY few things that can keep your information safe if a TLA wants it. You need to go full Edward Snowden (phones in Faraday cages, typing passwords under a sheet) levels of paranoia to be fully safe.

Drop "nation state is after me" from the threat model and you'll be a lot happier.

ENGNR · 2 weeks ago:
True, but there's a difference between

- TLA agency deploys scarce zero days or field ops because you're particularly interesting, vs..

- TLA agency has everything about you in a dragnet, and low level cop in small town searches your data for a laugh because they know you, and leaks it back to your social circle or uses it for commercial advantage

> nation state ... threat model

The history of tech is the history of falling costs with mass production. Expensive TLA surveillance tech for nation states can become broadly accessible, e.g. through-wall WiFi radar sensing sold to millions via IEEE 802.11bf WiFi 7 Sensing in NPU/AI PCs [1], or USB implant cables [2] with a few zeros lopped off the TLA price.

Instead of adversary motives, threat models can be based on adversary costs.

As adversary costs fall, threat models need to evolve.

[1] https://www.technologyreview.com/2024/02/27/1088154/wifi-sen...

[2] https://shop.hak5.org/products/omg-cable

> Once the data is out of your possession it's out of your control.

Actually, once your e2e key that encrypts your data is out of your possession, it's out of your control.

Over the past decade it's become commercially feasible to be NSL-proof.

Not everyone has nation states in their threat models. I want privacy from corporations / surveillance capitalism, not the US government. Apple's privacy promises are focused on keeping my data out of the hands of bad actors like Google etc. and that's more than enough for me.
With a threat model of Google getting your email, these statements get sillier. TLS and existing privacy policies are sufficient.

That's too many words with surprisingly little meaning. I'd suggest waiting for more technical details and treating this as marketing until then.

You can read more here: https://security.apple.com/blog/private-cloud-compute/

But in summary:

1. The servers run on Apple Silicon hardware, which has fancier security features.

2. The software is open source.

3. iOS verifies that the server is actually running that open-source software before talking to it.

4. This is insane privacy for AI.

The security features are meant to prevent the server operator (Apple) from accessing data that's being processed in their farm. The idea is that, with that plus E2E encryption, it should be much closer to on-device processing in terms of privacy and security.

Thanks! That's great and sounds like they're really trying to go as far as possible with it.

Here's also a great summary from Matthew Green: https://x.com/matthew_d_green/status/1800291897245835616

You do realise that already happens, though? If you read Apple's privacy policy, they send a lot of what you do to their servers.

Furthermore, how private do you think Siri is? Their privacy policy explicitly states they send transcripts of what you say to them. That cannot be disabled.

That's the problem. These AI features may be "free", but is there an option to disable them system-wide, so they can't rummage through all your data and build a profile in order to be helpful? If not, I won't update. And I mean one tickbox, not a separate switch for every app and feature like Siri has, making it nearly impossible to disable.

> Furthermore how private do you think Siri is? Their privacy policy explicitly states they send transcripts of what you say to them. That cannot be disabled.

Ten minutes ago I set up a new Apple device, and it not only asked me if I wanted to enable Siri, but also whether I wanted to contribute audio clips to improve it. What, exactly, cannot be disabled?

You can trivially find it in the Settings app after setup, too: Privacy & Security -> Analytics & Improvements -> scroll to the Improve Siri & Dictation toggle that explains that it controls whether Apple can store and review audio of interactions with Siri and the dictation function. Plenty of other options to review in the vicinity too, since the first party privacy settings are basically all in the same place.

That is the option for the audio itself. The transcripts of the audio (you do know what transcripts are, right?) are always sent to Apple, as per their privacy policy.

"When you use Siri, your device will indicate in Siri Settings if the things you say are processed on your device and not sent to Siri servers. Otherwise, your voice inputs are sent to and processed on Siri servers. In all cases, transcripts of your interactions will be sent to Apple to process your requests."

It's pretty clear and not in dispute that your transcripts are always sent to Apple.

That’s because Siri doesn’t run on-device; phones like the iPhone 6 can’t run that level of analysis. They “collect transcripts” insofar as they need them to process your request.

Nonetheless, Siri is trivial to disable altogether.

Yeah, if you asked the average person on the street (e.g. you) whether they thought Siri was 100% private, they'd say yes, because Apple has misled them and said it is. That's the point. Apple says everything is private but then quietly collects data via their privacy policies.
coob · 2 weeks ago:
Most people are happy with (2) already: iCloud Photos, device backups, iCloud Messages… email.

Those that won’t use those won’t use this either.

Certainly there's a difference. You are right that the jump between 1 and 2 is big, but it is careless to say that Apple, a company which strives for improved privacy and security, and ChatGPT have the same privacy practices.

No, that's not the point. The point is that neither of those companies can have the same values you have for your data, and you are leaving the security of that data in the hands of someone else. Even Apple, better than most, values your privacy at a dollar figure representing your custom and their reputation. That is not how I (nor most people) value my data. The latter point applies to any company, regardless of intention, because security breaches are a matter of when, not if, and anyone who says otherwise should not be talking about security.

Apple has demonstrated itself to be relatively trustworthy about privacy, while most AI companies have demonstrated the opposite, so I do see a significant difference.

Google was considered very cool and trustworthy at one point also. "Don't be evil" and all that.

Google was cool, once upon a time, but they always used your personal info pretty openly. The CEO himself famously said, “The Google policy on a lot of things is to get right up to the creepy line and not cross it.”

Apple has taken a markedly different approach, and has done so for years - E2E encryption, hashing and segmenting routes on maps, Secure Enclave, etc.

While I think it’s perfectly reasonable to “trust no one”, and I fully agree that there may be things we don’t know, I don’t think it’s reasonable to put Apple on the same (exceedingly low) level as Google.

No, they never were. They were "don't be evil", but at the exact same time everyone knew they were an advertising company, and most people in the field could see where it was heading eventually, or at least I'd hope.

Apple's motives are different: selling premium hardware and MORE premium hardware. They wouldn't dare fuck that up; their nest egg is hardware, plus slowly more services tied to said hardware ecosystem (iCloud subs, TV subs, etc.). Hence the privacy stance makes sense, to pull people into the ecosystem.

Google... everything Google does, even phones, is for more data gathering for their advertising revenue.

Google's entire business model was built on hoovering up user data and selling access to it in the form of AdSense. Without that data, their business falls apart.

Apple's business model is to entice people into a walled-garden ecosystem where they buy lots of expensive hardware sold at high margins. They don't need user data to make this work, which is why they can more comfortably push features like end-to-end and zero-knowledge encryption.

#2 is publicly auditable, 100% Apple-controlled, on Apple hardware servers, and tied to your personal session (probably via the on-device encryption). I'd imagine ephemeral Docker-style containers, spun up per request, or some form of encrypted AI lambdas.
The difference is Apple and OpenAI's privacy policies.
I think that is completely fair.

I also think a bunch of people will trust Apple’s servers more (though not completely) than other third parties.

Hopefully they have some toggle in settings for this.
Level 3 is supposed to support other models and providers in the future too. I hope it will support any server with a simple, standard API so I can run self-hosted Llama 3 (or whatever will be released in the next 6-12 months).
Or Groq. They can do 1250 tokens/s with Llama 3 8B.
It sounded like level 3 is meant for non-personal stuff: basically a search-engine-style feature, for when you want to look up things like sports records, or a movie and info about it, etc.
The problem is they don't explicitly define when level 1 can pass to level 2, or whether we can fully and categorically disable it. For all I know, 1 can pass to 2 when governments ask for personal data or when Apple's ad model needs some intimate details for personalization.
The information provided for level two is end-to-end encrypted and not stored, so the risk level is pretty low here.
End-to-end encrypted means that the other end (Apple/NSA) has access to it.
Imagine the memory on their server is encrypted with an on-processor key (something like Intel SGX): reading OS memory, e.g. by dumping it from Linux or from hardware, gets you nothing unless you somehow extract the key (which is different on each chip) from the physical chip. The process using that encrypted memory generates TLS keys for you to send the data, and operates on it only inside this secure enclave.

There is no way to access it without destroying the chip, and even then it would be extremely expensive and, imo, unlikely; certainly impossible at scale. Some scientists may be able to do it once in a lab.

BTW, there is an entire industry popping up around exactly this sort of use case. It's called "confidential computing", and the CNCF has some software in the works (Confidential Containers, IIRC). I'm pretty excited to see what RISC-V is going to bring to the party, enclave-wise.
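The flow described above can be sketched in a few lines. This is a toy model only: the XOR "cipher" stands in for hardware-backed AES, the handler stands in for code sealed inside an enclave, and none of these names come from any real SDK.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy one-time-pad; a real enclave uses hardware-backed AES.
    return bytes(d ^ k for d, k in zip(data, key))

def enclave_handler(ciphertext: bytes, session_key: bytes) -> bytes:
    """Stand-in for code running inside the enclave: plaintext exists
    only here, the "inference" runs, and nothing is persisted."""
    plaintext = xor(ciphertext, session_key)
    result = plaintext.upper()          # placeholder for model inference
    return xor(result, session_key)     # the reply leaves encrypted

def client_request(message: bytes) -> bytes:
    # Ephemeral per-request key, as if negotiated over TLS that is
    # terminated inside the enclave itself.
    session_key = os.urandom(len(message))
    encrypted_reply = enclave_handler(xor(message, session_key), session_key)
    return xor(encrypted_reply, session_key)

print(client_request(b"hello"))  # b'HELLO'
```

The point of the sketch: the host OS only ever sees `ciphertext` and `encrypted_reply`; dumping its memory reveals nothing without the per-request key.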
Ok, I'm imagining.

Now, is any of that actually true?

It does need to process the data, but the server has no persistent storage and no remote shell. It is a limited, locked-down, special-purpose iOS.
Maybe, maybe not. I would like to purchase one of those servers and put it in my rack, so I can monitor all network traffic. Is that an option?
That was my sense as well. I would have appreciated some clarification on where the line between 1 and 2 was, although I am sure a YouTuber will deep dive on it as soon as they have it in their hands
I'm skeptical of the on-device AI. They crave edge compute, but I'm doubtful their chips can handle a 7B-param model. Maybe, ironically, with Microsoft's Phi-3-mini-4k you can run this stuff on a CPU, but today it's nowhere near good enough.
I don't know how they are going to square the privacy circle when, at worst, it's a RAG-based firehose to OpenAI, and at best you can just ask the model to leak your personal info.
Said this in the other thread, but I am really bothered that image generation is a thing but also that it got as much attention as it did.

I am worried about the reliability; if you are relying on it giving important information without checking the source (like a flight), then that could lead to some bad situations.

That being said, the polish and actual usefulness of these features is really interesting. It may not have some of the flashiest things being thrown around, but the things shown are actually useful.

Glad that ChatGPT is optional each time Siri thinks it would be useful.

My only big question is, can I disable any online component and what does that mean if something can't be processed locally?

I also have to wonder, given their talk about the servers running the same chips: is it just that the models can't run locally, or is it possibly context-related? I can't tell whether it's entire features or just some requests that go to the cloud.

I wonder if that implies that over time different hardware will run different levels of requests locally vs the cloud.

Regarding image generation, it seems the Image Playground supports three styles: Animation, Illustration, or Sketch.

Notice what's missing? A photorealistic style.

It seems like a good move on their part. I'm not that wild about the cartoon-ification of everything with more memes and more emojis, but at least it's obviously made-up; this is oriented toward "fun" stuff. A lot of kids will like it. Adults, too.

There's still going to be controversy because people will still generate things in really poor taste, but it lowers the stakes.

I noticed that too, but my conclusion is that they probably hand-picked every image and description in their training data so that the final model doesn’t even know what the poor taste stuff is.
Exactly. And, I'm assuming, it will reject any prompts or words in poor taste.
> I am worried about the reliability; if you are relying on it giving important information without checking the source (like a flight), then that could lead to some bad situations.

I think it shows the context for the information it presents (the messages, events, and other source material), so you can quickly check whether the answer is correct. It's more like semantic search, but with more flexible text describing the result.

> I wonder if that implies that over time different hardware will run different levels of requests locally vs the cloud.

I bet that’s going to be the case. I think they added the servers as a stop-gap out of necessity; the ideal situation, to them, is the day they can turn those off because every device they sell has been able to run everything locally for X amount of time.

I’m betting 100% on this. And I think the new sidecar controlling your phone is an example of where they’re going in reverse.

If you have an M6 MacBook/iPad Pro, in two to four years it’ll run your AI queries there when you’re on the same network.

> I am worried about the reliability; if you are relying on it giving important information without checking the source (like a flight), then that could lead to some bad situations.

I am worried about the infinite ability of teenagers to hack around the guardrails and generate some probably-not-safe-for-school images for the next two years while Apple figures out how to get them under control.

They hid the workaround for this - it’s going to be available in US english first, and then other locations over the coming year.

This could be never. LLMs degrade fast as you move away from high-resource languages.

This seems really cool.

They said the models can scale to "Private Cloud Compute" based on Apple Silicon, which your device will verify is running "publicly verifiable software" in order to guarantee no misuse of your data.

I wonder if their server-side code will be open-source? That'd be positively surprising. Curious to see how this evolves.

Anyway, overall looks really really cool. If it works as marketed, then it will be an easy "shut up and take my money". Siri seems to finally be becoming what it was meant to be (I wonder if they're piggy-backing on top of the Shortcuts Actions catalogue to have a wide array of possible actions right away), and the image and emoji generation features that integrate with Apple Photos and other parts of the system look _really_ cool.

It seems like it will require M1+ on Macs/iPads, or an iPhone 15 Pro.

You don't even have to buy a new device, since it's backwards compatible with the A17 Pro and M1, M2, M3 and M4. It feels like the service integrations use existing models plus the automation APIs that originally came from AppleScript, extended to LLM and diffusion systems. It seems that they want the M4 out as soon as possible, though, for the gaming and cloud pushes.
For those curious, there is in fact a ChatGPT integration.

The way it works is that when the on-device model decides "this could be better answered by ChatGPT", it will ask you if it should use that. They described it in a way that suggests it will be pluggable for other models too over time. Notably, ChatGPT with GPT-4o will be available for free, without creating an OpenAI account.

I don't think that 4o will actually be available for free. It seemed like they were quite careful in choosing their words. My guess is 3.5 is free without an account, and accessing 4o requires linking your OpenAI account.
They only mentioned 4o, and they mentioned it explicitly at the start, well before mentioning, at the end of the presentation, that one can also tie in their OpenAI account, if they have one.

To me that implies 4o by default, but I guess we'll find out.

Apple must be paying OpenAI a pretty penny to give the best LLM tokens away for free.
It seems no money is exchanging hands!
I'm really curious about this. Framing it as "running a large language model in the cloud" is almost burying the lede for me. Is this saying that, in general, the client will be able to cryptographically ascertain what code the server is running? That sounds incredibly interesting and useful beyond this.
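That idea is essentially remote attestation. Here's a toy sketch of the client-side check; it's illustrative only — real attestation chains to hardware-rooted certificates rather than a shared HMAC key, and all names here are made up:

```python
import hashlib
import hmac

# Published "measurement" of the known-good, independently audited build.
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-build-v1").hexdigest()

def attest(running_image: bytes, chip_key: bytes) -> tuple[str, str]:
    """Server side: hash the code actually running and sign that hash.
    A real enclave signs with a key fused into the silicon, so the
    signature proves both *what* is running and *where*."""
    measurement = hashlib.sha256(running_image).hexdigest()
    signature = hmac.new(chip_key, measurement.encode(), hashlib.sha256).hexdigest()
    return measurement, signature

def client_will_send_data(measurement: str, signature: str, chip_key: bytes) -> bool:
    """Client side: refuse to send anything unless the server proves it
    is running the exact audited build."""
    expected = hmac.new(chip_key, measurement.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and measurement == EXPECTED_MEASUREMENT)

key = b"per-chip-secret"            # hypothetical hardware-fused key
good = attest(b"audited-build-v1", key)
bad = attest(b"tampered-build", key)
print(client_will_send_data(*good, key), client_will_send_data(*bad, key))
# prints: True False
```

The interesting property is the last line: a tampered build produces a different measurement, so the client declines before any data leaves the device.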
It seems like this is an orchestration layer that runs on Apple Silicon, given that the ChatGPT integration looks like an API call from it. It's not clear to me what is actually being computed on the "private cloud compute" side.
If I understand correctly, there are three things here:

- on-device models, which will power any tasks it's able to, including summarisation and conversation with Siri

- private compute models (still controlled by Apple), for when it wants to do something bigger that requires more compute

- external LLM APIs (only ChatGPT for now), for when the above decide it would be better for the given prompt, but always asking the user for confirmation

The second point makes sense: it gives Apple optionality to cut off the external LLMs at a later date if they want to. I wonder what percentage of requests will be handled by the private cloud models vs. locally. I would imagine TTS and ASR are local for latency reasons, and natural-language classifiers would certainly run on-device. I wonder if summarization and rewriting will, though; those are more complex and definitely benefit from larger models.
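The three tiers could be sketched as a simple dispatcher. All names here are hypothetical; Apple hasn't published its actual routing logic, so this is just the shape of the decision as described in the keynote:

```python
def ask_user_consent(prompt: str) -> bool:
    # Stand-in for the "Do you want me to use ChatGPT to do that?" dialog.
    return True

def route(prompt: str, needs_big_model: bool = False,
          needs_world_knowledge: bool = False) -> str:
    """Hypothetical three-tier dispatch for an Apple Intelligence request."""
    if needs_world_knowledge:
        # Tier 3: external LLM, gated on explicit per-request consent.
        return "external-llm" if ask_user_consent(prompt) else "declined"
    if needs_big_model:
        return "private-cloud-compute"   # Tier 2: Apple-run servers
    return "on-device"                   # Tier 1: local model

print(route("summarise this email"))                                  # on-device
print(route("rewrite my essay", needs_big_model=True))                # private-cloud-compute
print(route("recipe for banana bread", needs_world_knowledge=True))   # external-llm
```

Note how the consent prompt only sits on the tier-3 path: tiers 1 and 2 never leave Apple's trust boundary, which matches why the ChatGPT dialog exists at all.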
The "Do you want me to use ChatGPT to do that?" aspect of it feels clunky as hell and very un-Apple. It's an old saw, but I have to say Steve Jobs would be rolling over in his grave at that. I'm honestly confused as to why that's there at all. Could they not come up with a sufficiently cohesive integration? Is that to say the rest ISN'T powered by ChatGPT? What's even the difference? From a user perspective that feels really confusing.
I thought it was the smartest and most pragmatic thing they've announced.

Being best in class for on-device AI is a huge market opportunity. Trying to do it all would be dumb, like launching Safari without a Google search homepage partnership.

Apple can focus on what they are good at, which is on-device features and blending AI into their whole UX across the platform, without compromising privacy, and then take advantage of a market leader for anything requiring large external server farms and data sent across the wire, like AI search queries.

I think they also announced the possibility of integrating Siri with AI platforms other than ChatGPT, so this prompt would be especially useful to make clear to the user which of these AIs Siri wants to use.
From a user perspective it's 100% clear.

If the system doesn't say "I'm gonna phone a friend to get an answer for this", it's going to stay either 100% local or, at worst, 100% within Apple Intelligence, which is audited to be completely private.

So if you're asking for a recipe for banana bread, going to ChatGPT is fine. Sending more personal information might not be.

I just don't think the average user cares enough to want this extra friction. It's like if every Google search gave you lower-quality results and you had to click a "Yes, give me the better content" option each time to get the proper results. It's just an extra step that people are going to get sick of very fast.

You know what it's really reminiscent of? The EU cookies legislation. Do you like clicking "Yes I accept cookies" every single time you go to a new website? It enhances your privacy, after all.

In theory there isn't. In practice, >99% of the websites I visit have a cookie-banner thingy, including the EU's own website (https://european-union.europa.eu/index_en).

Think about it: even a government agency isn't able to produce a simple static web page without having to display that cookie banner. If their definition of "bad cookies that require a banner" is so wide that even they can't work around it to correctly inform citizens (without collecting any private data, displaying any ads, or reselling anything), maybe the definition is wrong.

For all intents and purposes, there is a cookie banner law.

They could run without a cookie banner, but their privacy policy states pretty clearly why they want your consent: to "gather analytics data (about user behaviour)". Additionally, you don't need to consent to this, and you can access everything without them "collecting any private data, displaying any ad or reselling anything". The only reason they ask for consent is to gather analytics, much like being asked for your postal code when paying in a shop.
The cookie banners are a cargo cult.

Someone somewhere figured out that it might be a good idea and others just copied it.

It's interesting you phrase it that way, because that's sort of how DuckDuckGo works with their !g searches. I'm not saying that's good or bad, it's just an observation.
Still involves friction. A more "seamless" way for Apple to do this would've been to license GPT-4's weights from OpenAI and run them on Apple Intelligence servers.
But that restricts it to just OpenAI, then.

I want to use perplexity from siri too!

It's a clear delineation between "My data is on my device or within Apple's ecosystem" and "My data is now leaving Apple and going to a 3rd party"
At the core of everything they presented is privacy. Yes the point is that most questions are answered locally or via the Private Compute system.

More specifically: "Is OpenAI seeing my personal data or questions?" A: "No; unless you say it's okay to talk to OpenAI, everything happens either on your iPhone or in Private Compute."

Apple is touting the privacy focus of their AI work, and going out to ChatGPT breaks that. I would be reluctant to use any of their new AI features if it weren't for that prompt breaking the flow and making it clear when they are getting results from ChatGPT.
What? The original Siri asked if the user wanted to continue their search on the web if it couldn't handle it locally. It was one of the last things from the Jobs era.
I agree. Quite odd and not very Apple-ish. I wonder if there’s some good reason for it; it must have been debated internally.
They'll probably add an option to disable that prompt at some point. I'm glad it is the default behavior, though.
> I wonder if their server-side code will be open-source

No, but they said it'll be available for audit by independent experts.

I don't understand why people act like this is a new way of working. Hundreds of ISO certifications require independent audit. Functionally this can be done in many ways, like source-code access for human reviewers, or static scanning with signed results. What's important is not who looks, be it PwC, Deloitte, or industry peers; it's what's being looked for, and what standards are being followed.
How do we sign up to be an independent expert? We need about 50,000 eyeballs on this at all times.
How many independent eyeballs are on Gemini's servers or OpenAI's?
They're not making the privacy claim
> It seems like it will require M1+ on Macs/iPads, or an iPhone 15 Pro.

They specifically stated it requires an iPhone 15 Pro or higher, or anything with an M1 or higher.

> Apple Intelligence is free for users, and will be available in beta as part of iOS 18, iPadOS 18, and macOS Sequoia this fall in U.S. English. Some features, software platforms, and additional languages will come over the course of the next year. Apple Intelligence will be available on iPhone 15 Pro, iPhone 15 Pro Max, and iPad and Mac with M1 and later, with Siri and device language set to U.S. English. For more information, visit apple.com/apple-intelligence.

iPhone 15 Pro: 8 GB RAM (https://www.gsmarena.com/apple_iphone_15_pro-12557.php)

iPhone 15: 6 GB RAM (https://www.gsmarena.com/apple_iphone_15-12559.php)

Along with the 2 GB RAM difference, they have different processors (A17 vs. A16).

https://en.wikipedia.org/wiki/Apple_A17

Per the comparison table on that page, the "Neural Engine" has double the performance in the A17 compared to the A16, which could be the critical differentiator.

That's a good reason not to upgrade my iPhone 13!
English only? That is surprising.
The platforms talk had a bit more architectural detail, and it looks like they heavily optimize/compress the foundation model to run specific tasks on-device. I'm guessing that sticking to US English allows them to compress the foundation model further?
As long as they don't geolock it to "english speaking" countries, I'm fine with that.
As far as I'm aware, the only time Apple has implemented that kind of restriction is with their DMA compliance. Like, I used the US App Store (with a US credit card) while physically in Europe for many years.
And Apple Fitness and Apple News.

I can follow workout instructions in English, as can my kids. But Apple has decided that Apple One is more shit over here for some reason.

I am quite disappointed that the 14 Pro is not supported. So much power, but they decided not to support any of the older chips.
The 15 Pro's SoC has an extra 2 GB of RAM, which could very well be make-or-break for running a local model, which tends to be very memory-constrained.
It's about the 15 Pro having a 2x more powerful Neural Engine.
It’s a matter of RAM, most likely: models require crazy amounts of RAM, and I bet they had to struggle to fit them in the 15 Pro’s 8 GB.
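Back-of-envelope math supports the RAM theory. Assuming, hypothetically, a ~3B-parameter on-device model (Apple hasn't disclosed the size), the weights alone cost:

```python
def weight_footprint_gb(params_billion: float, bits_per_weight: int) -> float:
    """Weight memory only; KV cache and activations need extra headroom.
    params * (bits/8) bytes, expressed in GB (1e9 bytes)."""
    return params_billion * bits_per_weight / 8

for bits in (16, 8, 4):
    print(f"3B model @ {bits}-bit: {weight_footprint_gb(3, bits):.1f} GB")
# 3B model @ 16-bit: 6.0 GB
# 3B model @ 8-bit: 3.0 GB
# 3B model @ 4-bit: 1.5 GB
```

Even at 4-bit quantization, ~1.5 GB of weights plus KV cache, on top of the OS and foreground apps, is a tight squeeze on a 6 GB phone, which is consistent with the 8 GB iPhone 15 Pro cutoff.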
Happy as long as there is a switch to toggle it all off somewhere. I find very little of this useful. Maybe someone does, and that’s great!

And my concern isn’t from a privacy perspective, just an “I want fewer things cluttering my screen” perspective.

So far though it looks like it’s decent at being opt-in in nature. So that’s all good.

My thoughts exactly. As someone who manages 145 iPhones for a health-care org, all of this stuff needs to be completely blockable and granularly manageable in Mobile Device Management, or things could go very, very wrong compliance-wise.
Strong agree here. Features are cool, but I value screen real estate and simplicity. Plus, the ChatGPT app works fine for me; I don’t need it built into other things yet.
I feel like this is actually the thing you want when you say "less things cluttering my screen".

Siri can now be that assistant that summarises or does things that would otherwise make you go through various screens or apps. It feels like it reduces clutter rather than increasing it, imo.

I simply cannot agree, but again, it's a personal thing. I never ever find voice interfaces useful though...

Aside: when the presenter showed the demo of her asking Siri to figure out the airport arrival time, and then gloated that it "would have taken minutes" to do on her own... I sat there and just felt so strongly that I don't want to optimize out every possible opportunity to think or work out a problem in my life for the sake of "keeping on top of my inbox".

I understand the value of the tools. But overall, nothing about them feels worth ticking through even more menus to make the magic statistical model spit out the tone of words I want, when I could have just sat there, thought about my words and the actual, real, human person I'm talking to, and rephrased my email by hand.

> I don't want to optimize out every possible opportunity to think or work out a problem in my life for the sake of "keeping on top of my inbox"

Completely agree. My first thought on seeing this stuff is that it suggests we, as an industry, have failed to create software that fulfils users’ needs, given we’re effectively talking about using another computer to automate using our computers.

My second thought is that it’s only a matter of time before AI starts pushing profitable interests just like seemingly all other software does. How long before you ask some AI tool how to achieve something and it starts pitching you on CloudService+ for only 4.99 per month?

It's actually taking LESS screen space, because "Siri" is now just a glowing edge on your screen.

And good news! You can clear your homescreen too fully from all icons now =)

I can see people using Rewrite all the time. In the grim darkness of the AI future, your friends speak only in language that is clean, sanitized, HR-approved, and soulless.
At work, yes. However, it won't be long until the language you speak will become a feature of your ML driven consumer language service. There will likely be products that reflect your style/ identity/ whatever. And once you reach a certain socioeconomic level, you'll speak a highly customized bespoke dialect that reflects your station in life, just like today but much, much weirder…
That perfectly describes how I feel about all of this.

I'm sure that there will be lots of genuinely useful things that come out of this AI explosion, but can't help but be a bit saddened by what we're losing along the way.

Of course I can choose not to use all of these tools and choose to spend time with folks of a similar mindset. But in the working world it is going to be increasingly impossible to avoid entirely.

Young people already seem bothered by how pristine/flawless modern photography looks and seem increasingly obsessed with using film cameras/camcorders to be more authentic or whatever pleasing attribute they find in that media. I think they'll respond with more misspellings and sloppier writing to appear more authentic
As one of these young people, you're way overestimating the popularity of these trends. There are always some "we gotta go back"-centered communities lingering in the background, but digital vs analogue photography isn't even a close match-up. People who want to get more into photography are far more likely to buy a good digital camera than a film camera.
I feel like this is an awful feature for your native language, but fantastically exciting for a second language where you're not quite fluent and need to be able to write coherently.
People already use words like 'product', 'content', 'feature', and 'vehicle' in everyday conversation. It makes me shudder every time.
Few thoughts:

It seems like this is what Rabbit's LAM was supposed to be. It is interesting to see it work, and I wonder how it will work in practice. I'm still not sold on using voice for interacting with things.

Image Generation is gross, I really didn't want this. I am not excited to start seeing how many horrible AI images I'm going to get sent.

I like Semantic Search in my photos.

This does seem to have the typical Apple polish. I think this might be the first mainstream application of gen AI that I can see catching on.

I like that they finally brought typing interaction to Siri. You won't always need to use voice.

This does look like a real-world implementation of the concept promoted by Rabbit. Apple already had the App Intents API mechanism in place to give them hooks into the apps. They have also published articles about their Ferret-UI LLM, which can look at an app's UI and figure out how to interact with it if there are no available intents. This is pretty exciting.

Text as a Siri interface has been available for a while now. Long-press the sleep/wake button to raise the prompt.
It has, but it’s presently an accessibility affordance you have to enable first. It’s found under device Settings > Accessibility > Siri > “Type to Siri” On/Off
Oh, go figure. Let that be a lesson: if you don't check out the accessibility options, you're missing at least half the cool stuff your phone can do to actually make your life easier.
I wonder how they will extend this to business processes that are not in their training set.

At https://openadapt.ai we rely on users to demonstrate tasks, then have the model analyze these demonstrations in order to automate them. The goal is similar to Rabbit's "teach mode", except it's desktop only and open source.

I had similar reactions, a couple add-ons to make:

1. Yes, App Intents feel like the best version of a LAM we'll ever get. With each developer motivated to expose their own actions for Siri to hook into, it seems like a solid experience.

2. Image gen - yeah, they're pretty nasty, BUT their focus on an "emoji generator" is great. Whatever model they made is surprisingly good at that. It's really niche but really fun, and the lifelessness of the generations doesn't matter so much here.

3. Polish - there's so much polish, I'm amazed. Across the board, they've given the "Intelligence" features a unique and impressive look.

This on-device plus private-cloud-compute approach, with tight hardware-software integration, may be the first genuinely useful take on genAI we’ve seen so far.

Apple may have actually nailed this one.

Edit: except for image generation. That one sucks

> Edit: except for image generation. That one sucks

Say more? Is it just the media thing about intellectual property rights?

I asked up-thread, and people apparently have concerns about the IP and think that AI images often lack taste.
It's fascinating to see this resurgence of people who now like copyright/IP law.
I don't know why it's that fascinating. Lots of people think some amount of copyright is reasonable but that life of the author + 70 years is far too long.
Meanwhile Adobe is in hot water for generating images in the style of Ansel Adams... https://www.theverge.com/2024/6/3/24170285/adobe-stock-ansel...
Adobe didn't generate them. Someone uploaded them to their stock photos site.

Nor was their issue that the images were in the style of Ansel Adams, but rather that they used his name. That's not a copyright issue. It's a trademark one.

Exactly, nuance is important. Short term protections are good, but companies shouldn’t be able to keep works from entering public domain for decades on end.
I was referring to the limitations of the feature which can only generate images in three pre-canned cartoony styles
I wonder if those are to avoid two of the big image generation controversies:

1. Imitation of artists' styles (Make an image in the style of...). The restricted styles are pretty generic, so harder to pin down as being a copy of or imitation of some artist.

2. It's cartoony, which avoids photorealistic but fake images being generated of real people.

Native smartphone integration was always going to be the most important UI for genAI.

Followed by maybe search engines once it gets to a certain level of quality (which we seem to be a bit far from).

Then either desktop or home assistants (Alexa).

Apple trying to rebrand AI = "Apple Intelligence" is a totally Apple thing to do.

I'll be curious to see if Apple gets caught with some surprise or scandal from unexpected behavior in their generative models. I expect that the rollout will be smoother than Bing or Google's first attempts, but I don't think even Apple will prove capable of identifying and mitigating all possible edge cases.

I noticed during the livestream that the rewrite feature generated some pretty bad text, with all the hallmarks of gen-AI content. That's not a good sign for the feature, especially considering they thought it was good enough to include in the presentation.

It's bad branding because they can't use the "AI" abbreviation. It's too commonly used to be appropriated by Apple. Honestly calling it "Apple Intelligence" just feels a little lazy.
> the re-write feature generated some pretty bad text, with all the hallmarks of gen AI content

This is the curse of small language models. They are better suited for constrained output like categorization. Using them for email generation takes... well, that takes courage.

Thankfully, there is an option to use GPT-4o for many of the text generation tasks.

I noticed this too. I quickly skimmed a rewritten email and it was totally wooden. It's the one where they ask the AI to reword their rant that includes tons of all-caps words.

The output is almost worse, dripping in a passive aggressive tone.

It's good enough for the typical Apple consumer.
Please don't sneer, including at the rest of the community.

- HN Guidelines.

I have only one question: can I turn it off?
You can turn Siri off, so I wouldn’t be surprised if this is the same: a toggle on by default that they present you when upgrading the OS. Perhaps even just the same toggle for Siri controls all of this as a whole.
Turn off what part and why? They announced several new systems, much of which runs on-device, one of which is simply an improved Siri. I was surprised by how considerate they seemed about AI data privacy, even for Apple.
> Turn off what part and why?

Assume any part, and assume none of your business.

It's not reasonable to expect to be able to turn off "any part" of a piece of software, unless it's open source and you're digging through the code yourself to remove sections, refactor, and recompile everything.

That said, Apple generally gives people very fine-grained controls over what software features they want enabled, at least compared to other closed-source software vendors.

My question "what part and why" was intended to open up a discussion about privacy in regards to Apple's AI. But if your answer is simply "none of your business", then my answer to the question "can I turn it off" is simply "nobody has any way of knowing yet." Neither of those answers are great discussion openers.

Your username seems to check out.

I don't want any part of my personal data (what I write, what I photograph, what I record, what I jot down) to be viewed by anything but my own eyes or the encryption algorithm converting it to ciphertext to send across a secure channel WITHOUT MY CONSENT.

Period.

Reason: again, nobody's business.

If you don't get this, then either a) you're not in a high-risk group for discrimination, or b) you've never been subjected to systemic policies designed to keep you "in line".

I do get this. I don't know why you'd assume I don't.

And the sentiment behind your comment seems very reasonable, reading past its non-sequitur tone.

I do wonder if my privacy awareness has a connection with the fact that I lived the first 13 years of my life under an Eastern Bloc dictatorship ...

However in this case I'm also concerned about needless power consumption. Especially on battery.

In most cases the component using the most power in a modern smartphone is the display.

And knowing Apple, the RAG-stuff will be done overnight when the phone is charging, not during use.

It's one thing to have private information at rest, another to have it indexed and interpreted by an LLM. What if some virus orders the LLM to search for blackmail material and email it to them? The very act of putting an LLM near your data is a security concern. If someone else orders your Siri to reveal something, it can get to the prize in seconds, with AI help.
A virus can use its own LLM, so I guess you don't want indexing at all. Makes it hard to find stuff.
It did sound like it would be opt-in. I think the current iteration of Siri already is, so it would make sense if they kept it that way.
Microsoft's Recall is going to have that feature, according to the latest updates on the matter. I hope Apple won't lag behind on implementing this one.
Which part? The online or offline capabilities?
Nope. I'm afraid the AI future is mandatory ;)
You're an apple user, you decided a long time ago that they know what's best for you
I almost shed a tear, then I remembered the alternative is Google...
Apple has decided to allow users to disable various features, so the question is, do they let you disable this.
Why? Things are secure (outside of the explicit OpenAI calls via Siri) and mostly seem subtly integrated. You don't have to use each feature, but why blindly disable all AI having not even tried it?
I've tried a number of these things and I honestly don't see the value in them. I have to double-check everything they do, and it takes longer to describe what I want and double-check everything than to just do it myself.

I'll be disabling everything I can. I don't use Siri or anything of that sort as well.

"It will automatically find a picture of your drivers license, read the number and add it to your text"

This is scary stuff that should not be happening on anything that is closed-source and unaudited publicly. The pervasiveness of surveillance it enables is astounding.

How is it any more dangerous than having a picture of your ID on your phone? It uses a local model for finding and extracting data, and confirms before autofill.

Should we start auditing wallets next? People's driver licenses are sitting insecure and unencrypted in their pockets! Anyone could grab it!

Security is important, but being alarmist toward thoughtful progress hurts everyone.

What's different about this from the current implementation of searching photos for 'driver's license' and it pulling up pictures of your license? iOS has already been using "AI" image recognition for years on your photos.
Yes, this is an extension of that feature and a further integration with other enhancements. Apple has been doing “machine learning” for years for features like this. Now they are starting to bring those features together using other models like LLMs.
Why take up processor time or memory for a feature I don't want? Or the increase in threat space?
This could be said for literally every single feature of a smartphone, down to the out of order execution of the CPU
Yes. I would like to be able to disable other smartphone features I don't use. But that's already the case. Like the GPS, for instance, is disabled unless I'm using the map. And even that can be set to "never" if I want.
I also have animations toned down from the accessibility settings, yes :)
That's exactly why I wait extra long to install updates.
Not wanting to send that much data to Apple's server no matter the pinky promise they make about caring for our data? That's a legit ask.
Apple's? You mean OpenAI's...
Stuff that I don't use can get in the way.
I am pretty unhappy with Apple doing the image generation, was really hoping that just would not happen.

But a lot of the other features actually seem useful without feeling shoehorned in. At least so far.

I am hoping that I can turn off the ability to use a server while keeping local processing, but curious what that would actually look like. Would it just say "sorry can't do that" or something? Is it that there is too much context and it can't happen locally or entire features that only work in the cloud?

Edit:

OK how they handle the ChatGPT integration I am happy with. Asks me each time if I want to do it.

However... using recipe generation as an example... is a choice.

c1sc0
What’s wrong with the image generation?
I think there's still a myriad of concerns around the ethics of using others' uncredited images to power models that aim to disenfranchise artists.

but my biggest concern is that I think they look tacky, and putting it right in the messaging apps is gonna be ... irritating.

Emoji, Memoji, stickers, now gen images. Can’t wait to start receiving them from my dad and my mother-in-law in the most absurd of contexts. Like honestly, I like how much the older relatives enjoy weird, tacky stuff like this.
Tech people make fun of tacky stuff like this, but it's a big driver for regular people to upgrade quickly.
People have ethical concerns about all the public images that were scraped. Regardless of whether or not we agree with them, it is a pretty popular stance to take.
Most of the generated images shown were terrible but my kid is going to love that emoji generator.
It was Google who wanted to put glue on pizza, not ChatGPT :)
I feel like I remember there being plenty of examples of bad ChatGPT recipe generation.

Regardless, even if it wasn't ChatGPT, given the recent problems I would not have used recipe generation as an example, no matter who it came from.

Am I the only person who's reached their threshold on companies forcing and shoving AI into every layer and corner of our lives?

I don't even look at this stuff anymore and see the upside to any of it. AI went from "This is kinda cool and quaint" to "You NEED this in every single aspect of your life, whether you want it or not." AI has become so pervasive and intrusive, I stopped seeing the benefits of any of it.

I feel like this WWDC kind of solidified that these corporations really don't know what to do with AI or aren't creative enough. Apple presented much better AI features that weren't called AI than the "summarize my email" and "generate an ugly airbrushed picture you buy at the mall kiosk to send to your mom".

All of these "make your life easier" features really show that no tech is making our lives simpler. Task creation is maybe easier, but task completion doesn't seem to be in the cards. "Hey Siri, summarize my daughter's play and let me know when it is and how to get there" shows there's something fundamentally missing in the way we're living.

I'm resistant, too. I think from a number of reasons:

- So far, the quality has been very hit or miss, versus places where I intentionally invoke generative AI.

- I'm not ready to relinquish my critical thinking to AI, both from a general perspective, and also because it's developed by big companies who may have different values and interests than me.

- It feels like they're trying to get me to "just take a taste", like a bunch of pushers.

- I just want more/better of the right type of features, not a bunch of inscrutable magic.

wilg
The new generative AI stuff has been barely implemented in most products; I don't know how you are experiencing it as pervasive and intrusive. Are you sure you're not just cynical from the flood of negative news stories about AI?
this being a news thread about Apple integrating AI into all their operating systems and apps aside... Chrome has started prompting me to use generative AI in text boxes. Twitter (X) has an entire tab for Grok that it keeps giving me popup ads for. Every single productivity suite (Notion, Monday, Jira) is perpetually prompting me to summarize my issue with AI. GitHub has banner ads for Copilot. It is everywhere.
Summarization was implemented everywhere because it was the easiest AI feature to ship when a VP screamed "Do AI, so our C-suite can tell investors we're an AI company!"
Summarization is damn useful, though. It has solved clickbait and TLDR-spam, now you can always know if something is worth watching/reading before you do.
Agreed, the dehyping of article titles is one of the main reasons I built hackyournews.com, and the avoidance of clickbait via proactive summarization is consistently rewarding.
AI doesn't have to be intrusive but this "personal assistant" stuff, which is what they're marketing to the general public at the moment, certainly is.
Are you sure you’re not optimistic just because you stand to materially benefit from widespread adoption of ChatGPT wrappers?
wilg
How would I materially benefit?
Currently, AI use has a "power user" requirement. You have to spend a lot of time with it to know what it is and is not capable of, how to access those hidden capabilities, and be very creative at applying it in your daily life.

It's not unlike the first spreadsheets. Sure, they would someday benefit the entire finance department, but at the beginning only people who loved technology for its own sake learned enough about them to make them useful in daily life.

Apple has always been great at broadening the audience of who could use personal computing. We will see if it works with AI.

I think it remains to be seen how broadly useful the current gen of AI tech can be, and who it can be useful for. We are in early days, and what emerges in 5-10 years as the answer is obvious to almost no one right now.

You're in for a ride.

This barely scratches the surface on how much AI integration there's going to be in the typical life of someone in the 2030s.

pndy
> Am I only person who's reached their threshold on companies forcing and shoving AI into every layer and corner of our lives?

After a random update, my bank's app received an AI assistant out of the blue, supposedly to help their clients.

At first I was interested in how these algorithms could enhance apps and services, but now this does indeed feel like shoving AI everywhere possible, even when it doesn't make any sense; as if companies are trying to shake a rattle over your baby's cradle to entertain it.

Aside above, I was hoping that after this WWDC Siri would get more languages so I could finally give it instructions in my native language and make it actually more useful. But instead there are generated emoticons coming (I wonder if people even remember that word). I guess chasing the hottest trends seems more important for Apple.

They are not making it mandatory to use, just widely available through various interfaces. I see this closer to how spellcheck was rolled out in word processors, then editors, then browsers, etc.
if I can't turn 100% of this botshit off then my iphone's going in the bin

I'll go back to a dumbphone before I feed the AI

You’re not feeding anything by having this feature turned on
I have zero confidence in any privacy or contractual guarantees being respected by the parasitic OpenAI
ru552
You have to acknowledge a pop-up authorizing your request to be sent to OpenAI every single time it happens. It's not going to happen by mistake.
And they’re parasitic how, exactly? Even if they do collect every single one of my prompts, the benefit of ChatGPT outweighs my data being sold.
Right. This thread on the other hand ...
I have curtailed my internet commenting considerably in the last 12 months

it is now almost exclusively anti-AI, which funnily enough I don't mind them training on

This escalating AI war will take no prisoners. I am not an Apple customer in any way, I am in Google's ecosystem, but I feel that I need to make an exit, at least for some essentials, preferably this year.

My e-mail, my documents, my photos, my browsing, my movement. The first step for me was setting up Syncthing and it was much smoother than I initially thought. Many steps to go.

I haven’t adopted passkeys, and I moved all my email out of Gmail to a private domain. Photos back up to my NAS. I’m terrified of the automated systems deciding I’m a bad actor.

I can’t help but think it’ll get worse with AI

Not that you shouldn't do it, but too much of an active effort or obsession with not using standard e-mail services or photo back ups is probably a faster way to get flagged as suspicious lol
For things that don’t leave your system it’s ok, but the moment you send something to others it will go into the systems that you try to avoid anyway.

Mostly I see no point in things like email self hosting if half my contacts are on Gmail and the other half on Microsoft.

My suggestion (as someone that tried to escape for some time) is to build a private system for yourself (using private OS and networks) and use a common system to interface with everyone else.

There was one part that I didn't understand about AI compute: For certain tasks, server side compute will be done as on-device chip is not powerful enough I suppose. How does this ensure privacy in verifiable manner? How do you CONFIRM that your data is not shared when cloud computing is involved with AI tasks?
Your data is being shared. But they've shown that it is done in a way where only required data leaves the device, and there are some protections in place to minimize misuse of that data (the OS will only communicate with publicly signed versions of the server, for example). Calls to Apple's "Private Cloud Compute" are opaque to the user; ChatGPT calls need permission, if I understood correctly.
So it is not really private then.
I think it's a semantic thing at this point. If for you private can't mean plaintext living on a computer you don't control then no. If it's private in the way your iCloud photos are private then yes, and seemingly more so.
What does private mean? If I store my children's photos on iCloud encrypted, that's not private?
> the OS will only communicate with publicly signed versions of the server for example

This hardly increases security, and does not increase privacy at all. If anything it provides Apple with an excuse that they will throw at you when you ask "why can't I configure my iOS device to use my servers instead of yours?" , which is one of the few ways to actually increase privacy.

This type of BS should be enough to realize that all this talk of "privacy" is just for the show, but alas...

Before you write off their claims I encourage you to read more about the detailed specifics (if you have the technical footing and inclination to do so). While the approaches should certainly be probed and audited, it’s clearly more than performative. https://security.apple.com/blog/private-cloud-compute/

Also a good thread from Matthew Green, a privacy/cryptography wonk, on the subject: https://x.com/matthew_d_green/status/1800291897245835616?s=4...

Can you configure a Google phone to use your servers instead of theirs for Google Assistant requests?
I don't know what your argument was going to be if I said "no", but in any case, the answer is yes, you can. You can even entirely uninstall Google Assistant and replace it with your own software, and you do not lose any functionality of the device nor require access to private hooks to do that. I do that myself.
> Privacy protections are built in for users who access ChatGPT — their IP addresses are obscured, and OpenAI won’t store requests. ChatGPT’s data-use policies apply for users who choose to connect their account.

> ChatGPT will come to iOS 18, iPadOS 18, and macOS Sequoia later this year, powered by GPT-4o.

> and OpenAI won’t store requests.

What's a promise from Sam Altman worth, again?

That's not a "promise from Sam Altman", that's a contractual term between Apple, Inc. and OpenAI, LLC.

So I think it's worth as much as Apple is willing to spend enforcing it, which I imagine would be quite a bit.

Idk about Sam Altman in particular, but OpenAI pulled the bait-and-switch you can still see in its name. We don't know what the contract says exactly, but there are always loopholes, and I would not assume anything OpenAI says to be in good faith.

I also don't really care, but it's understandable why some people do.

> that's a contractual term between Apple, Inc. and OpenAI, LLC.

do you have a source on this or are you just assuming?

Do you think this is all running off the standard OpenAI API, and they picked a dev at random in Apple to use their account's API keys?

Of course there is some agreement…

It would be a very surprising business arrangement if that was not explicitly called out. Apple is not going to leave this to chance.
> Apple is not going to leave this to chance.

How much would you be willing to bet, on a statement like this? I love a sporting chance.

If we find out in the next 12 months that OpenAI has been storing requests from Apple/Siri AND Apple doesn't come down on them with a 10 ton lawyer hammer, I'll pay you $500.

Can you match it the other way around? :)

crickets from OP
I will bet around $10,000 FWIW.
Even if the promise were made in good faith, I fear it may be hard to resist pressure from law enforcement etc.
If Apple is sitting in the middle proxying the IP addresses, and not keeping any logs for longer than they absolutely need to, law enforcement could go pee up a rope, right?
You'd hope so, but corporate resistance against domestic intelligence has a bumpy track record: https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...
That's just the enterprise guarantee. The same applies to Azure OpenAI services and the API services provided by OpenAI directly.
wilg
What broken Sam Altman promises are you referring to?
avtar
Personally I would say the disparity between what was in their founding agreement "be open-source—freely and publicly available for anyone to use, modify and distribute" https://archive.ph/R0LBL to the current state of affairs.

But I guess the list of grievances could be longer:

https://garymarcus.substack.com/p/what-should-we-learn-from-...

zoky
Leaving OpenAI, for one.
The promise is from Apple, not OpenAI, and likely contractual.

If OpenAI actually went against that, Apple would unleash the mother of all lawsuits.

Tim Cook doesn't seem to mind hanging his reputation on sama's promise, so at least that's something
"let's store responses and a hash of the request intent in a kvp then"
Please, an encrypted key-value store. The private key is only shared between you, Apple, and relevant law-enforcement agencies. It's as private as you can ask for, these days!
Storing the response and *a GPT-summarized request* would not violate the spirit or letter of the statement here, either.
I am deeply disturbed they decided to go off-device for these services to work. This is a terrible precedent, seemingly inconsistent with their previous philosophies and likely a pressured decision. I don't care if they put the word "private" in there or have an endless amount of "expert" audits. What a shame.
They didn't have a choice. Doing everything on-device would result in a horrible user experience. They might as well not participate in this generative AI rush at all if they hoped to keep it on-device. Which would have looked even worse for them.

Their brand is as much about creativity as it is about privacy. They wouldn't chop off one arm to keep the other, but that's what you're suggesting they should have done.

And yes, I know generative AI could be seen specifically as anti-creativity, but I personally don't think it is. It can help one be creative.

I don't think it would've looked bad for their brand not to have participated. Apple successfully avoided other memes like touchscreen laptops and folding phones.
Siri is bad and is bad for their brand. This is making up for that ground.
> Doing everything on-device would result in a horrible user experience. They might as well not participate in this generative AI rush at all if they hoped to keep it on-device.

On the contrary, I'm shocked over the last few months how "on device" on a Macbook Pro or Mac Studio competes plausibly with last year's early GPT-4, leveraging Llama 3 70b or Qwen2 72b.

There are surprisingly few things you "need" 128GB of so-called "unified RAM" for, but with M-series processors and the memory bandwidth, this is a use case that shines.

From this thread covering performance of llama.cpp on Apple Silicon M-series …

https://github.com/ggerganov/llama.cpp/discussions/4167

"Buy as much memory as you can afford would be my bottom line!"
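The arithmetic behind that advice is easy to sketch. A rough back-of-the-envelope estimate (the 1.2 overhead factor and the 70B example are assumptions for illustration, not llama.cpp's actual memory accounting):

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough RAM needed to hold an LLM's weights.

    overhead is an assumed fudge factor for KV cache and runtime
    buffers. Billions of params x bytes per weight gives GB directly.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead

# A 70B model (e.g. Llama 3 70B) at common quantization levels:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_memory_gb(70, bits):.0f} GB")
# 16-bit: ~168 GB, 8-bit: ~84 GB, 4-bit: ~42 GB
```

By this estimate a 70B model fits comfortably in 128GB of unified memory at 8-bit or 4-bit quantization, but not at full fp16, which is one reason quantized builds dominate local inference on Apple Silicon.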

Yes - but people don't want to pay $4k for a phone with 128GB of unified memory, do they?

And whilst the LLMs running locally are cool, they're still pretty damn slow compared to ChatGPT or Meta's LLM.

Depending on what you want to do though.

If I want some help coding or ideas about playlists, Gemini and ChatGPT are fine.

But when I'm writing a novel about an assassin with an AI assistant and the public model keeps admonishing me that killing people is bad and he should seek help for his tendencies, it's a LOT faster to just use an uncensored local LLM.

Or when I want to create some people faces for my RPG campaign and the online generator keeps telling me my 200+ word prompt is VERBOTEN. And finally I figure out that "nude lipstick" is somehow bad.

Again, it's faster to feed all this to a local model and just get it done overnight than fight against puritanised AIs.

To say nothing of battery life.
wilg
You are deeply disturbed by the idea that some services can be better implemented server-side? Who do you think pressured them, the illuminati?
Here's a shocking suggestion: maybe wait until these services can be implemented on-device, and then implement them on-device, instead of shipping this half-baked something? Apple seems to be the perfect company to make it happen; they produce both the hardware and the software, tightly integrated with each other. No one else is this good at it.
wilg
They implemented way more on the device than anyone else is doing, and I don't see how it makes it "half-baked" that it sometimes needs to use an online service. Your suggestion is essentially just not shipping the product until some unspecified future time. That offers no utility to anyone.
It is, however, very much Apple's philosophy to wait it out and let others mature a technology before making use of it.
At the current rate of advancement, we might get a runaway AGI before the technology "matures".
Or we might not. LLMs are remarkably dumb and incapable of reasoning or abstract thinking. No amount of iterative improvement on that would lead to an AGI. If we are to ever get an actual AGI, it would need to have a vastly different architecture, at minimum allowing the parameters/weights to be updated at runtime by the model itself.
Right. But there's so much effort, money and reputation invested in various configurations, experimental architectures, etc. that I feel something is likely going to pan out in the coming months, enabling models with more capabilities for less compute.
It offers utility to user privacy.
Here’s a shocking suggestion: if you’re not comfortable using it, don’t use it.
I like their approach. Do everything possible on device and if it can only be done off-device, provide that choice.
You misunderstand.

They will go off device without asking you, they just ask if you want to use ChatGPT.

No: they do it on device, and ask before going off-device to their private cloud. ChatGPT is then a separate integration/intent you can ask for.
I don't see anything to that effect in tfa, and a few people in the comments have claimed otherwise.
Yeah that's how it works.
Are they giving us a choice? I thought the choice was primarily about using ChatGPT? It sounded like everything in apples "Private Cloud" was being considered fully private.
cedws
Circa 2013 Snowden says the intelligence agencies are wiretapping everything and monitoring everyone.

In 2024 they don't have to wiretap anything. It's all being sent directly to the cloud. Their job has been done for them.

I hear you, but I caution against such oversimplification. Advanced Data Protection for iCloud is a thing. Our culture of cloud reliance is truly dangerous, but some vendors are at least trying to E2E-encrypt data where possible.

There are big risks to having a cloud digital footprint, yet clouds can be used “somewhat securely” with encryption depending on your personal threat model.

Also, it’s not fair to compare clouds to wiretapping. Unless you are implying that Apple’s infrastructure is backdoored without their knowledge? One does not simply walk into an Apple datacenter and retrieve user data, no questions asked. Legal process is required, and Apple’s legal team has one of the stronger track records of standing up against broad requests.

hu3
iCloud end-to-end encryption is disabled by default.

So by default, user data is not protected.

https://support.apple.com/en-us/102651

Yes, because the UX is better that way.

With ADP if your mom loses her encryption keys, it's all gone. Forever. Permanently.

And of course it's Apple's fault somehow. That's why it's not the default.

Broadly, in the US, the Federal Wiretap Act of 1968 still applies. You're going to have to convince a judge otherwise.

Yes, perhaps broad dragnet-type requests might be shot down by some judges (outside of Patriot Act FISA judges, of course).

I would warn you about the general E2E encryption and encrypted-at-rest claims. They are in fact correct, but perhaps misleading? At some point, for most services, the data does get decrypted server-side; cue the famous ":-)"

It's been going to the cloud since at least 2013 as well.
That’s a necessary temporary step until these powerful LLMs are able to run locally. I’m sure Apple would be delighted to offload everything on device if possible and not spend their own money on compute.
They prompt you before you go off-service, which makes the most sense.
They prompt you before they send your data to OpenAI, but it's not clear that they prompt you before sending it to Apple's servers (maybe they do and I missed it?). And their promise that their servers are secure because it's all written in Swift is laughable.

Edit:

This line from the keynote is also suspect: "And just like your iPhone , independent experts can inspect the code that runs on the servers to verify this privacy promise.".

First off, do "independent experts" actually have access to closed-source iOS code? If so, we already have evidence that this is insufficient (https://www.macrumors.com/2024/05/15/ios-17-5-bug-deleted-ph...).

The actual standard for privacy and security is open source software, anything short of that is just marketing buzz. Every company has an incentive to not leak data, but data leaks still happen.

They're promising to go farther than that.

>> Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection. Apple Intelligence with Private Cloud Compute sets a new standard for privacy in AI, unlocking intelligence users can trust.

pdpi
That promise made my ears perk up. If it actually stands up to scrutiny, it's pretty damn cool.
I look at things like that from a revenue/strategic perspective.

If Apple says it, do they have any disincentives to deliver? Not really. Their ad business is still relatively small, and already architected around privacy.

If someone who derives most of their revenue from targeted ads says it? Yes. Implementing it directly negatively impacts their primary revenue stream.

IMHO, the strategic genius of Apple's "privacy" positioning has been that it doesn't matter to them. It might make things more inconvenient technically, but it doesn't impact their revenue model, in stark contrast to their competitors.

Their disincentive to delivering it is that it's not actually possible.
It's certainly possible through remote attestation of software. This is basically DRM on servers (i.e., the data is not decrypted on the server unless the server stack is cryptographically attested to match some trusted configuration).
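A minimal sketch of that trust decision, with hypothetical names and made-up measurement values; real attestation (TPM/TEE quotes, Apple's Private Cloud Compute) adds signed hardware quotes and fresh nonces, none of which is modeled here:

```python
import hashlib

# Measurements (hashes of server software images) published to a
# public, auditable log for inspection. Values here are made up.
PUBLIC_LOG = {
    hashlib.sha256(b"pcc-server-image-v1").hexdigest(),
    hashlib.sha256(b"pcc-server-image-v2").hexdigest(),
}

def client_will_send_data(server_image: bytes) -> bool:
    """Refuse to send data unless the server's measured software
    image appears in the publicly logged allowlist."""
    measurement = hashlib.sha256(server_image).hexdigest()
    return measurement in PUBLIC_LOG

print(client_will_send_data(b"pcc-server-image-v1"))    # logged build: accept
print(client_will_send_data(b"modified-server-image"))  # unknown build: refuse
```

The point is only that the client's decision to release data can be gated on a publicly verifiable measurement, rather than on trust in whatever the server claims about itself.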
That requires trusting that the attestation hardware does what it says it does, and that the larger hardware system around it isn't subject to invasion. Those requirements mean that your assurance is no longer entirely cryptographic. And, by the way, Apple apparently plans to be building the hardware.

It could be a very large practical increase in assurance, but it's not what they're saying it is.

I haven't read all the marketing verbiage yet, but even "Our cloud AI servers are hardware-locked and runtime-checked to only run openly auditable software" is a huge step forward, IMHO.

It's a decent minimum bar that other companies should also be aiming for.

Edit: ref https://security.apple.com/blog/private-cloud-compute/

I agree with you about this being a bad precedent.

However, to me, the off-device bit they showed today (user consent on every request) represents a strategic hedge as a $3T company.

They are likely buying time and trying to prevent people from switching to other ecosystems while their teams catch up with the tech and find a way to do this all in the “Apple Way”.

I hope at some point they start selling a beefy Mac mini variant that looks like a HomePod to work as an actual private AI server for the whole family.
pram
This is called a Mac Studio
It would be great if they let us install the private cloud server on our Macs, but I’m not holding my breath. Then again, in the name of more privacy, maybe they want to sell a dedicated local AI hub as another hardware product. They could even offer it for an affordable upfront cost that can be amortized into a multi-year iCloud subscription.
Siri and other assistants already do this no?
  • re · 2 weeks ago
Yes. Siri debuted with the iPhone 4s (running iOS 5) in 2011. It wasn't until iOS 15 in 2021 that Siri gained the ability to do some things without an internet connection, on devices with the A12 Bionic chip (the 2018 iPhone XR/XS or later).
Yes
You can’t charge for a service so easily if it runs on-device.
> Apple sets a new standard for privacy in AI,

That does not necessarily mean better, just different. I reserve judgment until I see how it shakes out.

But if I don't like this feature and can't turn it off, I guess it's sadly back to Linux on my personal laptops.

It's just Siri, but with better context.

If you don't specifically activate it, it won't do shit.

While I think it's cool, and I appreciate Apple crafting better stories on why this is helpful, I still think the everyday person won't really care whether it's AI or not.
But Apple's integration means you can use it and not care whether it is AI or not. It'll just become part of using iOS (let's face it, that's where the majority of Apple's users will be). From creating a new "genmoji" to any of the other examples, it lets people do this without knowing WTF huggingface or the other equally ridiculously named products are. They don't need accounts. They just type a message and decide to put in a new image.

Of course we've only seen examples from an overly produced hype/propaganda video, but it looks to me like yet another example of Apple taking products and making them usable to the masses.

  • amne · 1 week ago
Watched the Tim Cook and MKBHD interview, and Tim did say something along the same lines: the average smartphone user doesn't care about the technology branding, only about what it can do, and that's what Apple aims for.

And I agree with their goal here:

DALL-E how much now? It can do what? Text to image? Can it do emojis? vs. Genmoji: oh, it can do emojis. Nice!

same with "Rewrite" and so on.

There's value in OS integration, but again, what are the real use cases? Memoji or whatever doesn't qualify. Apple has added a ton of features in recent years that I haven't used once. If it's going to manage my calendar in a way I can rely on, or make autocorrect smarter, that's useful.
If only they had shown examples of how they are integrating with calendars, emails, texts, photos. You should reach out to Apple's marketing department about producing better release videos that have examples of how the new features will be used. I bet they'd think it was a great idea!
I only read the article, didn't watch the 2hr video, and it's only marketing material. What it really does in my hands is tbd.
> but it looks to me of yet another example of Apple taking products and making them usable to the masses

This is a bit obsequious to Apple. I find it hard to give a cogent argument for how ChatGPT is not "usable to the masses" at this point (and being -used- by the masses).

It doesn't integrate with anything; you need to explicitly give it context every time you ask it something.

You can't just log in to ChatGPT and ask it what was on your calendar 2 weeks ago.

The fact that someone was even making this argument suggests they didn't fully comprehend the presentation or missed some salient details. How anyone could confuse anybody's current integration of AI tools, be it chat or generative images, with something so central to users' everyday lives is beyond me. I would ask for examples of anything else that comes close.
  • ru552 · 2 weeks ago
This is exactly what they are going for. You can just ask Siri now "what day did my wife say the recital is?" and Siri spits the answer out without requiring you to go scroll through your messages. Who cares that an LLM did the work?
Agreed! And the UI seemed pretty focused on not really clarifying too much; I think they just mentioned AI a lot since it was WWDC.
If Apple does a really good job of this, then the everyday person probably shouldn't care if it's AI or not.

Who cares how your flight information shows up at the right time in the right place? the only thing that should matter is that it does.

And nobody cares about how absolutely terrifying your statement truly is, because the shiny benefits obfuscate the destruction of privacy, despite Apple's reassurances.
The upsides are obvious and concrete, the downsides are mostly hypotheticals.

People already carry around a device with a GPS, camera, and microphone that has access to most of their intimate and personal communications and finances. Adding AI capabilities doesn't seem like a bridge too far, that's for sure.

  • 2 weeks ago
Most of Apple's announcements today featured AI but the term wasn't explicitly mentioned. I think the last portion of the keynote that focused on AI was merely for investors tbh
"Semantic Index" sure is a better name than "Recall". Question is whether I can exfiltrate all my personal data in seconds?
I'm sure a simple Webkit vulnerability (there's none of those, ever, right?) will definitely not ensure that Semantic Index is featured in a future pwn2own competition.
  • fmbb · 2 weeks ago
I mean I can already search my photos for “dog” or “burger” or words in text on photos. Adding an LLM to chat about it is just a new interface is it not?
I think the important thing is that the semantic index tracks all you do through all your apps.
  • 2 weeks ago
They are likely implemented very differently. I’m not certain but I imagine the current photos app uses an image model to detect and label objects which you can search against. I expect Semantic Index (by virtue of the name) to be a vector store of embeddings.
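The practical difference is that an embedding index retrieves by vector similarity rather than by exact labels. A toy Python sketch, using bag-of-words counts as a stand-in for a learned embedding model (the documents and scoring are illustrative only, not how Apple's Semantic Index actually works):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector. A real semantic
    index would use a learned dense embedding model instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Index" of snippets from different apps, as a semantic index might hold.
documents = [
    "dinner reservation confirmed for friday at seven",
    "your flight ba 286 departs at 6 pm",
    "photos from the dog park last weekend",
]
index = [(doc, embed(doc)) for doc in documents]

def search(query: str) -> str:
    """Return the indexed snippet most similar to the query."""
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]

print(search("when is my flight"))  # -> "your flight ba 286 departs at 6 pm"
```

With real embeddings, "when does my plane leave" would also land on the flight snippet despite sharing no words with it; that semantic matching, across every app's text, is the step beyond the Photos-style object labels.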
It's all in the "private cloud". "Trust me bro", it's like totally private, only us and a handful of governments can read it.
Yeah. It's going to be great. Selected experts are saying so.
Wouldn't this reduce sales for Grammarly? If Apple packs the same feature into every application in iOS, it is kinda cool.

Private Cloud: isn't this what Amazon did with their Fire tablet? What is the difference with Apple's Private Cloud?

> Wouldn't this reduce sales for Grammarly?

https://en.wikipedia.org/wiki/Sherlock_(software)#Sherlocked...

> Sherlocked as a term

> The phenomenon of Apple releasing a feature that supplants or obviates third-party software is so well known that being Sherlocked has become an accepted term used within the Mac and iOS developer community.[2][3][4]

1Password too.
Well, the Passwords app is just the Passwords section in Settings moved out into its own app. It already exists on Windows, too, but maybe they are updating it to allow autofill without using a Chrome extension or add other features. It isn't the biggest change, just bringing attention to an existing feature that already competes with 1Password et al.
  • cueo · 2 weeks ago
It would be good if they add support for third-party browsers. Bitwarden (or other apps) can feel clunky sometimes compared to Keychain / Passwords.
Which browser are you using?

I switched to Apple Passwords and have been using the official Chrome extension for a few months. It's not as seamless as some of the password manager extensions, but has been working well enough.

https://chromewebstore.google.com/detail/icloud-passwords/pe...

After years and years of annoying ads, Grammarly taking a hit is the least they deserve
Agreed! I'm happy to not have to hear about them anymore.
  • xnx · 2 weeks ago
Grammarly is a great example of the classic adage, "a feature, not a product".
TBH I'd say the same about Notion.
> Wouldn't this reduce sales for Grammarly?

There's a term for that, it's called being Sherlocked: https://www.howtogeek.com/297651/what-does-it-mean-when-a-co...

How many use Grammarly on a Mac exclusively? My guess is that most of their accounts come through schools and companies. But yeah, in any business there is a risk that a better competitor comes along.
It jumped out to me that I had to highlight and ask it to check my grammar, rather than have it be an automatic process.

I don't use Grammarly, really, but I think at least that one is more automatic?

I would not bet on Grammarly's future.
I am excited to try Siri with this technology enabled. I can't really remember a time when Siri ever really worked, although recently I actually got her to play a song on YouTube for me after a few attempts and was pretty pleased with that. Outside of "set my alarm for 4:30" kind of stuff, she's never really been that useful. And if you are even kind of disabled, this feature can be useful to the point of life-changing if it is done properly.
They did a lot of work for this release, and the number of integrations is beyond what I expected. In a few years time you might not need to hold your phone at all and just get everything done with voice - kind of cool, actually.

Auto transcripts for calls (with permission) is another feature I really liked.

I was a little surprised to see/hear no mention of inaccuracies, but for ChatGPT they did show the "Check for important facts" notice.

There is a lot less fodder for inaccuracies if the data and processing are all on your device. A lot of the inaccuracies in Gemini and ChatGPT arise because they are using the web for answers and that is a much less reliable source than your own emails and messages.
That sounds like what Humane is trying to do. But I would honestly hate to do everything by voice and have everyone around me know what I’m doing and hear everyone around me talk to their phones all the time. Sounds like a nightmare
> That sounds like what Humane is trying to do.

The only thing Humane was trying to do was scam their investors. Let us never speak of them again.

I would expect it to be situational. I also was happy to see that they introduced a typed interface to Siri so you can do this without speaking.
  • amne · 2 weeks ago
So far the only reasonable place I can think of where I could find myself actually using voice to control anything is on the toilet. That's it.
  • runeb · 2 weeks ago
Walking, cycling, running, driving, relaxing on the couch
  • amne · 1 week ago
This is a large numbers problem that is not yet visible. You can't have everybody walking and talking. There would be too much noise (think crickets, cicadas, toads, etc.)

I'm not going into cycling/running and talking .. that's just not how things work when you need to breathe.

Driving and talking to a phone to then have it recite back to you 10 minutes of details you can just glance at but would be dangerous to?

If I'm relaxing on a couch .. i'm using a device. And please don't come back with "play me a chill song" as a fancy use case.

What I'm trying to say is that voice is not it and the only other kind of interaction I'm looking forward to see evolve is neuralink-style. In the sense that it needs to be wireless / non-invasive for mass adoption. That's it.

So the future of computing really is AI agents doing everything:

- siri text chat now on the lock screen

- incoming a billion developer app functions/APIs

- better notifications

- can make network requests

Why even open any other app?

> Why even open any other app?

This was my first thought when I saw the Rabbit r1 - will all of us become backend developers just gluing various APIs between legacy services and LLMs? Today seems like another step in that direction.

The whole world will be headless content. There won't be any web pages, or bank sites, or TV networks. Nobody will be a developer. We'll all just be content authors, like Google Maps Guides basically being unpaid interns checking restaurant data for Google.

You open your phone, it just shovels content. And it does absolutely nothing but optimize on addiction.

No apps, only masters.

The year is 2040. I pick up my iPhone. I ask Siri Pro to be entertained. She makes me a mix of Instagram Reels, TikToks, Youtube Shorts and Netflix trailers, not only handpicked for me, but each of those re-cut and re-edited to match my tastes.

When I ask Siri Pro what I'm doing on the weekend, she plans a dinner with a mix of friends and compatible strangers. Any restaurant is fine: the food is going to be personalized anyway.

I hope the 2040 iPhone will be able to not start playing fucking music every time I sit in a car.

  >> - incoming a billion developer app functions/APIs
That would be cool, but the App Intents API is severely crippled. Only a few hardcoded use cases are supported.

So any _real agent_ which has full access to all Apps can still blow Siri out of the water.

Nobody is realizing this coming singularity.

Your phone won't do anything else. For 99% of people, when they pick up their phone, AI will just decide what they want to see. And most will accept it.

Someday everyone in the room will pick up their phones when they all ring at once. It will be some emotional trigger like a live feed from a school shooting. Everyone in the room will start screaming at the totally different experiences they're being presented. Evil liberals, clueless law enforcement, product placement being shown over the shooter's gun. You'll sit horrified because you returned to a dumbphone to escape.

That will be the reality if this AI assistant stuff isn't checked hard now. AI is getting better at addiction an order of magnitude faster than it's getting better at actual tasks.

  • c1sc0 · 2 weeks ago
Not necessarily, that entirely depends on the reward function being used, but I get your point.
The WWDC is still ongoing and the stream can be followed here: https://www.apple.com/apple-events/event-stream/

(Sharing because I had trouble finding it).

  • 2 weeks ago
The image generation seems really bad. Very creepy, offputting, uncanny-valley images. And that's the best cherry-picked examples for marketing.

I'm curious to try some of the Siri integrations - though I hope Siri retains a 'dumb mode' for simple tasks.

I wish there was a way to leverage my M1 Mac to use this on my iPhone Pro 14. Like a private connection between my phone and computer to use the more powerful chip, even if it's limited to when I'm at home on the same Wi-Fi. Latency shouldn't be too bad.

But I think Apple is going to limit iPhones from doing something like that to boost sales of the 15 Pro and the future gens.

Yes, I would love the escalation path to be: on-device -> owned Mac -> "private cloud"
Oh, well, many apps will have a hard time competing with “Apple Intelligence” features. Why bother downloading a third-party app if some feature you want is included by default in the OS?

Better yet, no more dealing with overpriced subscriptions or programs that do not respect user privacy.

Kudos to the Apple software team making useful stuff powered by machine learning and AI!

It was amusing to see the Duolingo product placement when their entire product is just a prompt in ChatGPT.
Amazing how Microsoft, Google and now Apple are racing to 'generate' more and more text and images, and they also race to 'summarize' the now generated texts because everything is just noise. Like an anxious digital beehive.

By the end of the year maybe 1% of the content you interact with will be human made.

Even now on HN maybe 20-30% of the comments are generated by various transformers, and it seems every input box on every OS now has a context-aware 'generate' button, so I suspect it will be way more in a few months.

The Eternal September is coming. (and by ironic coincidence it might actually be in September haha)

TBH, I think the IT industry is too focused on eating itself. We are happily automating our own jobs away while the other industries basically just sleep through it.

I don't want generative AI in my phone. I want someone, or something to book a meeting with my family doctor, the head of my son's future primary school, etc. I don't need AI to do that. I need the other industries (medical/government/education) to wake up and let us automate them.

Do you know that my family doctor ONLY takes calls? Like in the 1970s, I guess? Do you know it takes hours to reach a government office, and they work maybe 6 hours a day? The whole world is f**ing sleeping. IT people, hey guys, slow down on killing yourselves.

AI is supposed to get rid of the chores; now it leaves us with the chores and takes the creative part away. I don't need such AI.

I wonder if Apple ever approached Google about using Gemini as the flagship integration. I say that because during the keynote I kept thinking to myself, this could be the moment that Google realises it needs to stick to what it knows best - Search - and all they have to do is sit back and watch the hype fade away.

But that’s in a perfect world.

Even to this day, post ChatGPT, I still can’t imagine how I would ever use this AI stuff in a way that really makes me want to use it. Maybe I am too simple of a mind?

Maybe the problem is in the way that it is presented. Too much all at once, with too many areas of where and how it can be used. Rewriting emails or changing invitations to be “poems” instead of text is exactly the type of cringe that companies want to push but it’s really just smoke and mirrors.

Companies telling you to use features that you wouldn’t otherwise need. If you look at the email that Apple rewrote in the keynote - the rewritten version was immediately distinguishable as robotic AI slop.

My understanding is that Apple's approach to this integration is adaptable; much like how you would change your browser's search engine, you'll be able to change which external AI model is utilized. ChatGPT, Gemini, Claude, etc.
  • rurp · 2 weeks ago
I don't think the choice of integration really matters for GP's point. Regardless of which model is used, how useful is the ability to rewrite an email in AI Voice really going to be? If I'm struggling over how to word an email there's usually a specific reason for it; maybe I'm trying to word things for a very particular audience or trying to find a concise way to cover something complicated that I have a lot of knowledge of. General purpose language model output wouldn't help at all in those cases.

I'm sure there are usecases for this and the other GenAI features, but they seem more like mildly useful novelties than anything revolutionary.

There's risk to this as well. Making it easier to produce low value slop will probably lead to more of it and could actually make communication worse overall.

TBF I was too harsh in my original comment. I did use ChatGPT to automate away the chore part of coding (boilerplate, for example). But I have a gut feeling that in maybe 5-10 years this is going to replace some junior programmers' jobs.

My job can be largely "AIed" away if such AI gets better and the company feeds internal code to it.

  • nomel · 2 weeks ago
> My job can be largely "AIed" away if such AI gets better and the company feeds internal code to it.

The first company to offer their models for offline use, preferably delivered in a shipping container you plug in, with the ability to "fine tune" (or whatever tech) on all of a customer's internal stuff, wins the money of everyone that has security/confidentiality requirements.

Unless the company handles national security, the existing cloud ToS and infrastructure fulfill all the legal and practical requirements. Even banks and hospitals use the cloud now.
  • nomel · 2 weeks ago
The context here is running a third-party LLM, not running arbitrary things in the cloud.

> the existing cloud tos and infrastructure fulfill all the legal and practical requirements

No, because the practical requirements are set by the users, not the TOS. Some companies, for the practical purposes of confidentiality and security, DO NOT want their information on third party servers [1].

Top third-party LLMs are usually behind an API, with things like retention on those third-party servers for content-policy/legal reasons. An on-premise offering that kept content-policy/legal retention on premise too, for any needed retrospection (say, after some violation threshold), would attract a bunch of $$$ from companies wanting to use these services.

[1] Companies That Have Banned ChatGPT: https://jaxon.ai/list-of-companies-that-have-banned-chatgpt/

edit: whelp, or go this route, and treat the cloud as completely hostile (which it should be, of course): https://news.ycombinator.com/item?id=40639606

If it can automate a junior away, it seems just as likely it will make that junior more capable.

Somebody still needs to make the decisions that it can't make well. And some of those decisions don't require seniority.

That’s not what happens.

What happens is if you don’t need junior people, you eliminate the junior people, and just leave the senior people. The senior people then age out, and now you have no senior people either, because you eliminated all the junior people that would normally replace them.

This is exactly what has happened in traditional manufacturing.

> this could be the moment that Google realises it needs to stick to what it knows best - Search

In my mind Google is now a second class search like Bing. Kagi has savagely pwned Google.

> this could be the moment that Google realises it needs to stick to what it knows best - Search

You misspelled "ads"

> AI is supposed to get rid of the chores, now it leaves us with the chores and take the creative part away. I don't need such AI.

You know I hadn't considered that and I think that's very insightful. Thank you

This quote has been circulating recently:

> I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes

Seems kind of silly. Laundry machines and dishwashers exist. The issue with the last mile is more robotics and control engineering than strictly AI. It's getting annoying seeing AI used as an umbrella term for everything related to automation.
That’s not possible yet, moving atoms is much more difficult than moving bits.
I feel like you've hit this industry in the nose without realizing it. How much actual value is the tech industry producing?
Absolutely, doing easier things at massive scale can still have more value than harder things at small scale.

I actually think exactly what should happen is already happening. All the low hanging fruit from software should be completed over the next decades, then the industry will undergo massive consolidation and there will be a large number of skilled people not needed in the industry anymore and they can move onto the next set of low hanging hard(er) problems for humanity.

Given that almost every major thing in the world runs on some kinds of computers running some software - a lot, probably. The fact that we don't have perfect and infallible robots and universal ways to manipulate the unpredictable, chaotic environment that is the real world (see also - self-driving cars) simply doesn't affect a lot of industries.
Bits will move bots, and hopefully do laundry and dishes too.
As for government, it depends on the country. In Poland we have the mCitizen (mObywatel) mobile app that lets you handle more things year by year, and we have websites with a unified citizen login for most other government interactions.

The last time our IRS wanted something from me, they just e-mailed me, I replied, and the issue was solved in 5 minutes.

Oh, and you don't need any paper IDs within the country: driver's license, car registration, and official citizen ID are apps on your phone, and if you don't have your phone when, say, the police catch you, you give them your data and they check it against their database and your photo to confirm.

Sounds similar to what the Ukrainian government did with the Diya app (lit. "Act/Action" but also an abbreviation of "the country and me") a few years ago. It's an interesting trend to see Eastern Europe adopt this deep level of internet integration before the countries that pioneered that same internet.
> The last time our IRS wanted something from me, they just e-mailed me, I replied, and the issue was solved in 5 minutes.

Lol, that will never happen in the USA. We have companies like Intuit actively lobbying against making things easy because their entire business is claiming to deal with the complexity for you.

Yeah. Another cool thing is that we have government tax forms that are web/mobile, and it takes like 2 min and 8 clicks to fill them - including login. (for private people)
You don't have to walk to the local government office to get car registration plates anymore? That was always annoying as hell.
That’s one of the remaining bastions, sadly. The pickup I can understand, but the need for signup is weird.

On the upside, they are removing the requirements to change plates when you buy a used car, so there’s that.

In Sweden, doctors have a fair bit of automation/systems around them; the sad part is that much of it has been co-opted for more stringent record keeping etc. that just makes doctors unhappy and balloons administration costs, instead of focusing on bringing better care to patients.

In essence, we've saved 50 lives a year by avoiding certain mistakes with better record keeping, and killed 5,000 because the medical queues are too long due to busy doctors, so people don't bother getting help in time.

I have a faint-to-noticeable but persistent back pain. It should be checked out, but I don't want to cause myself bigger pain and mental strain than the back pain itself: talking to 3-4 people who send me around and put me in phone queues weeks apart, just to see a doctor sometime in the future with my embarrassingly low-priority issue, a doctor buried in mountains of paperwork, bored, with too little time to diagnose me (and the risk of an even bigger pile of paperwork). It's a different country; life is all the same.
I completely agree, especially with the taking away the creative part and leaving us with the chores.

Doctors have exams, residencies, and limited licenses to give out to protect their industry. Meanwhile, tech companies will give an engineering job to someone who took a 4 month bootcamp.

  • runeb · 2 weeks ago
I share your frustration with services that won't let you automate them, but to me that's precisely what generative AI will let you do. You don't need an API at the family doctor's to have AI automate it for you. It just rings them up and sorts it out at your command. AI is like obtaining an API to anything.
AI is skipping software integrations the same way cell phone towers (and Starlink) skipped phone wire deployment.
> Do you know that my family doctor ONLY take calls?

And despite that it's still your family doctor.

I fully agree with your vision. It's obvious once laid out in words and it was a very insightful comment. But the incentives are not there for other industries to automate themselves.

I like a family doctor who only takes calls. Good doctors are responsive or have responsive staff. One time a doctor was locked into booking and communicating via this One Medical app that's a total piece of shit and just made things harder, so I went elsewhere. If someone makes a truly better solution, AI or not, doctors will use it without being forced.

And government offices don't even care to begin with, you have no other choice.

> I don't want generative AI in my phone. I want someone, or something to book a meeting with my family doctor, the head of my son's future primary school, etc. I don't need AI to do that.

If someone can do that more productively with Gen AI, do you care?

  • xnx · 2 weeks ago
> Do you know that my family doctor ONLY take calls? ... Do you know it takes hours to reach a government office, and they work maybe 6 hours a day?

Google has a few different features to handle making calls on your behalf and navigating phone menus and holds.

I've had some success with Google Assistant calling restaurants to make reservations when they are phone-only. I expect it's a matter of time until they can camp on my doctor's office. Or call my insurance and pretend to be me.
>some success with google assistant calling

The funny thing is, these auto-callers don't even need to be successful. They just need to become common enough for restaurants and doctors to get annoyed to the point where they finally bring their processes to the 21st century.

I know this wasn’t really your point, but most physicians around me use Epic MyChart, so I can book all that online. I also almost exclusively use email to communicate with our school district, and we’re in a small town.
Social problems are the hard ones, information problems are the easy ones. So the latter are the low-hanging fruit that gets solved first
Ohhhh yes. That's why I was so hyped about Google Duplex or duo?! Never heard of it again....
It's available today, it's just not a product called "Duplex". Android has call screening and "hold my call" and phone menu tree detection. On select Google Maps listings, you can make reservations by clicking a button which will make a phone call in the background to make a reservation.
  • pms · 2 weeks ago
Great points!

The only thing I'd add: I don't think the responsibility for the lack of automation is solely on these other industries. To develop this kind of automation they need funds and IT experts, but (i) they don't have funds, especially in the US, since they aren't as well funded as the IT industry, and (ii) for the IT industry this kind of automation is boring; they prefer working on AI.

In my view, the overall issue is that capitalism is prone to herding and hype, and resulting suboptimal collective decision-making.

The world has never cared about what you want. Your life has always revolved around the world. Don't like it, you vs the world. Beat it if you can.
I agree. It's just some rant. Whatever, better bury it under the other comments...
None of these features seem to be coming to Vision Pro, which I think is quite baffling. Arguably it's the device that can use them the most.
Word on the street (someone who was talking to Apple employees at WWDC) is that the Vision Pro doesn’t have enough headroom on the processor for it. It’s driving that sucker really hard just doing its “regular” thing.
baffling indeed- seems like they should be over-investing in AVP right now, not under-investing
  • rdl · 2 weeks ago
I'm super excited about how the Apple private compute cloud stuff works -- I tried to build this using Intel TXT (a predecessor of SGX) and then SGX, and Intel has fucked up so hard and for so long that I'm excited by any new silicon for this. AWS Nitro is really the direct competition, but having good APIs which let app developers do stuff on-device and in some trustworthy/private cloud in a fairly seamless way might be the key innovation here.
  • xnx · 2 weeks ago
Credit where credit is due for co-opting the components of the "AI" acronym.
Agreed. Got to hand it to them that marketing was sharp on the name. Unless, of course, it doesn’t really work as advertised and then every “AI <negative>” search specifically bubbles Apple stories to the top.
It is funny to think about how many apps have probably built text generation into their products, just to see it enabled on Apple devices for free.
It's either "Apple Intelligence" or "Generative Intelligence", not "Artificial Intelligence" and "Generative Models"... so silly to brand common ideas with a small twist.

Basically all your information is sucked into a semantic system, and your apps are accessible to a LLM. All with closed models and trusted auditors.

Also funny how they pretend it's a great breakthrough when Siri was stupid-Siri for so many years and is only now, finally, coming to the AI party.

I really hope those gen-images won't be used to ridicule and bully other people. I think it's kind of daring to use images of known people without their consent, relying on the idea that you know them.

And it's dawning on me that we are already neck-deep in AI. It's flowing through every app and private information. They obliterate any privacy in this system, for the model.

Would you rather they have jumped onto AI early and LLM-ified Siri years ago?
It's late but better late than...
This looks cool for v1! The only problem I see is most devices don't have much RAM, so local models are small and most requests will go to the servers.

Apple could use it to sell more devices - every new generation can have more RAM = more privacy. People will have real reason to buy a new phone more often.

Apple is starting to anticipate higher RAM needs in their M4+ silicon chips: there are rumors they are including more RAM than specified in their entry-level computers.

https://forums.macrumors.com/threads/do-m4-ipad-pros-with-8g...

One reason could be future AI models.

I'm not sure if this has been verified independently, but interesting nonetheless and would make sense in an AI era.
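The RAM pressure is easy to sanity-check with back-of-envelope math (my own rough numbers, not anything Apple has published):

```python
# Back-of-envelope: weight memory is roughly parameters x bits-per-weight / 8.
def model_ram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate GB of RAM needed just for the model weights."""
    return params_billion * bits_per_weight / 8

# A ~3B-parameter model quantized to 4 bits needs ~1.5 GB of weights,
# before counting the KV cache and activations -- tight on an 8 GB phone
# that also has to run the OS and foreground apps.
print(model_ram_gb(3, 4))   # 1.5
print(model_ram_gb(70, 4))  # 35.0 -- why a 70B-class model stays in the cloud
```

Which is exactly why the on-device model has to be small and everything else goes to the servers.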

For anyone who is technical and wants to play with AI but doesn’t want to use cloud services it’s worth digging into LangChain, CrewAI, OpenDevin. Coupled with Ollama to serve the inference from your local network. You can scratch the AI itch without getting in bed with OpenAI.
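For the Ollama route specifically, you don't even strictly need a framework; its local REST API is enough. A minimal sketch, assuming an Ollama server is running on its default port 11434 with a `llama3` model already pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> bytes:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ask("Summarize my meeting notes...")  # nothing leaves your local network
```

LangChain, CrewAI, and the rest layer orchestration on top of this, but the privacy property is the same: inference stays on hardware you own.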
> Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection.

Technically, the sentence could be read as saying that experts inspect the code, and the client uses TLS and CAs to ensure it's only talking to those Apple servers. But that's pretty much the status quo and uninteresting.

It sounds like they're trying to say that somehow iPhone ensures that it's only talking to a server that's running audited code? That would be absolutely incredible (for more things than just running LLMs), but I can't really imagine how it would be implemented.

> I can't really imagine how it would be implemented.

People do stuff that they claim implements it using trusted, "tamperproof" hardware.

What they're ignoring is that not all of the assurance is "cryptographic". Some of it comes from trusting that hardware. It's particularly annoying for that to get glossed over by a company that proposes to make the hardware.

You can also do it on a small scale using what the crypto types call "secure multiparty computation", but that has enormous performance limitations that would make it useless for any meaningful machine learning.

There is no known solution to remote software attestation that does not depend on trusted hardware.
That's correct. But Apple is not making that clear, and is therefore misrepresenting what assurance can be offered.
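To make the trusted-hardware caveat concrete, here is a toy model of the "refuse to talk to unlogged software" flow (my own simplification, not Apple's actual protocol): the part the client can check cryptographically is log membership; whether the measurement honestly reflects the running code is exactly the part you must trust the hardware for.

```python
import hashlib

# Public transparency log of audited build measurements (toy stand-in).
PUBLIC_LOG = {hashlib.sha256(b"audited-build-v1").hexdigest()}

def measure(server_build: bytes) -> str:
    """Stand-in for the hardware-signed measurement of the running software."""
    return hashlib.sha256(server_build).hexdigest()

def client_accepts(measurement: str) -> bool:
    """Client-side check: only talk to servers whose software is in the log."""
    return measurement in PUBLIC_LOG

assert client_accepts(measure(b"audited-build-v1"))
assert not client_accepts(measure(b"tampered-build"))
# The unverifiable step: trusting that measure() was actually computed over
# the code that is running, which is what the "tamperproof" hardware must
# guarantee -- no amount of cryptography on the client removes that trust.
```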
I'm so happy about this. Siri has great voice recognition and voice synthesis, but it really struggled with intent, context, and understanding what I wanted it to do.

Combining the existing aspects of Siri with an LLM will, I expect, make it the best voice assistant available.

the natural language tasking for actions between apps is the first thing that's made me excited about anything related to the latest AI craze. if apple can keep it actually private/secure, I'm looking forward to this.
If I can just get Siri to control music in my car I will be happy.

"Hey Siri, play classical work X." (A randomly selected version starts playing.)

"Hey Siri, play a different version." (The same version keeps playing.)

"Hey Siri, play song X." (Some random song that might have one similar keyword in the lyrics starts playing.)

"No, play song X." ("I don't understand.")

"Hey Siri, play the Rangers game." ("Do you mean hockey or baseball?")

"Only one is playing today, and I've favorited the baseball team, and you always ask me this, and I always answer baseball." ("I can't play that audio anyway.")

(Car crashes off of bridge.)

(All sequences shortened by ~5 further tries at different wordings to get Siri to do what I want.)

Did I miss the explanation of how they trained their image generation models? It's brave of a company serving creative professionals to generate creative works with AI. I'm a fan of using generative AI, but I would have expected them to at least say a little about what they trained on to make their diffusion models capable of generating these images.

Other than that, using an LLM to handle cross-app functionality is music to my ears. That said, it's similar to what was originally promised with Siri etc. I do believe this technology can do it well enough to be actually useful, though.

I thought it was interesting that the only image generation styles they support are sketches (that look like a Photoshop styling) and goofy 3D cartoons -- not really competition for most creatives.
The privacy conversation was pretty shady, and honestly full of technical holes with pointless misleading distractions
I thought privacy was really well handled for a high-level overview. Basically it seems like anything it can't do on device uses ephemeral private compute with no stored data.

Any data sent to 3rd party AI models requests your consent first.

The details will need to emerge on how they live up to this vision, but I think it's the best AI privacy model so far. I only wish they'd go further and release the Apple Intelligence models as open source.

if the servers are so private, why is on-device such a win? here are some irrelevant distractions:

- the cpu arch of the servers

- mentioning that you have to trust vendors not to keep your data, then announcing a cloud architecture where you have to trust them not to keep your data

- pushing the verifiability of the phone image, when all we ever cared about was what they sent to servers

- only "relevant" data is sent, which over time is everything, and since they never give anyone fine-grained control over anything, the llm will quietly determine what's relevant

- the mention that the data is encrypted, which of course it isn't, since they couldn't run inference on it. They mean in flight, which hopefully _everything_ is, so it's irrelevant

it will defer to the server a _lot_, if you just consider the capability they can fit on that phone
Considering they spent the first half of that segment throwing shade at people who claim privacy guarantees without any way to verify them, Apple hopefully will provide a very robust verification process.
Like they've done in the past, huh?

They talk about "independent experts" a bit, which I remember being hindered (and sued?) by them rather than supported.

Yep, initiatives like these have a nasty habit of being underfunded and deprioritized.
You can only go into so much detail in that format for that audience. I was happy that they simultaneously posted a much more technical deep dive: https://security.apple.com/blog/private-cloud-compute/
Regarding OpenAI, has Apple in its history ever relied so heavily on a 3rd party for software features?

(TSMC for hardware, but it seems very un-Apple to be so dependent upon someone else for software capabilities like OpenAI)

Google Maps in the early days of iOS?

Anyway, it seems like a small subset of Siri queries utilize ChatGPT; the vast majority of functionality is apparently performed either locally or in Apple's cloud.

They were also pretty explicit about planning to support other backend AI providers in the future.

The OpenAI reference came at the end, and it appears it's mostly a fallback -- an option that users must explicitly allow every time. Hardly a dependency. Most of the time it will be on-device or Apple-hosted in the "private compute cloud", not connected to OpenAI at all.
They did it with Google Maps and YouTube. They also do this with the search engine used in Safari.

I believe they will just provide an interface in the future to plugin as a Backend AI provider to trusted parties (like the search engine) but will slowly build their own ChatGPT for more and more stuff.

As with Google Maps, my guess is that they will only rely on it long enough to get their own LLM offer up to parity, at which point it might still be there as an option but there will be very little need for users to activate it.

Also, it seems that most of Siri's improved features will still work without it (though perhaps less well in some cases) -- and therefore Apple is not fully dependent on it.

OG iPhone had Google as Maps provider and YouTube both within Apple shells and the branding downplayed in Maps case

That’s the only case I can think of where it’s an external tech you’re making requests to; usually it’s things like Rosetta, made outside of Apple IIRC but integrated internally.

> Rosetta, made outside of Apple IIRC but integrated internally

Don’t think that’s right. I think Rosetta was always made inside Apple.

https://en.wikipedia.org/wiki/Rosetta_(software)

Perhaps mixing it up because of Rosetta Stone?

https://en.wikipedia.org/wiki/Rosetta_Stone_(software)

> Transitive is providing the engine used in Apple's Rosetta software, which translates software for its current machines using PowerPC processors so it can run on forthcoming Intel-based Macintoshes. "We've had a long-term relationship with them," Transitive Chief Executive Bob Wiederhold said Tuesday.

https://www.cnet.com/tech/services-and-software/the-brains-b...

Thank you for the correction. For those wanting to know more, the technology was called QuickTransit.

https://en.wikipedia.org/wiki/QuickTransit

Rosetta 2 may have been developed in-house, though. That bit isn’t yet clear.

That might be smarter than we initially give it credit for. By leaving the “safer” (read: harder to get wrong) things to their own models and the more “creative” stuff to an explicit external model, they can shift blame: “Hey, we didn’t make up that information, we explicitly said that was ChatGPT.” I don’t think they’ll say it outright like that, because they won’t have to.
Maybe I missed something but it doesn't sound like OpenAI is powering any of this except the optional integrations.
Seems like the OpenAI integration is a nice-to-have, but mostly separate from Super-Siri?
OpenAI seems like the last-chance fallback - in which case, they've already done the exact same thing with Siri trying to Google search your request if it couldn't do anything with it.
Google Maps, YouTube, on the original iPhone?
But those were standalone apps.

This AI capability is integrated throughout the entire OS and Apps.

It's now part of the "fabric" of iOS.

Only in response to some classes of requests. They didn’t go into detail about when, but they said that the local Siri LLM would evaluate the request and decide whether it could be serviced locally, in their private cloud AI, or would need to use OpenAI. Then it would pop up a request asking if you want to send it to OpenAI. It doesn’t look like that would be a particularly common occurrence. Seems like it would be needed for “answerbot” types of requests where live web data is being requested.
The majority of this is local AI with nothing to do with openAI. Only particularly complex requests go to them
Does Microsoft Office in early days of Mac OS count? I guess not.
I don't see why it would not count. Same for Adobe products.
Microsoft Office was released five years after the Macintosh; what are you talking about?
The applications that were later bundled into Office were on the Mac pretty early: 1985 for Word and Excel, and the first PowerPoint in 1987 was Mac-only.
Fair, though the very first Macs came with MacWrite preinstalled.
iCloud uses Google Cloud
I wondered the same, but frankly, what other options are there?
It's going to be easy to substitute in their own LLM behind the API in the future. None of the branding or platform is controlled by OpenAI.
It seems that the Apple Intelligence stuff will be 15 Pro only. Man, I just bought a 15 ~8 months ago. That really sucks.
For real. I’m sure a fair number of previous processors are able to handle it fine; it's just a reason for people to buy the next phone.
There has never been a better time to move to Linux. Have you tried Omakub? Manjaro? Mint? Ubuntu 24? These are polished and complete alternatives and your favorite app probably has a Linux build already!
> These are polished and complete alternatives

Are they though?

I just set up Ubuntu 24 for my son to play games and it's comparatively a very unpolished experience. I'm being very polite when I say that.

Even as someone who keeps a laptop booted into Linux most of the time, yes there are bumps and rough edges that will be encountered once venturing off the most common path of “internet, video, and word processor box”. It’s much better than it once was but it still has problems and the way that fervent advocates try to sweep them under the rug doesn’t help the situation.
Ubuntu, sadly, is not a good experience for a multitude of reasons outside of Canonical's control, including codec and software licensing restrictions.

Gamers should absolutely be heading towards Nobara Linux (Fedora-based, created by GloriousEggroll of Proton-GE fame). Developers should be trying Omakub. Grandma and Grandpa should be using Linux Mint.

That's interesting - it was Ubuntu 24 that made me feel confident the first time to recommend to non-computer enthusiasts. What about Ubuntu 24 came off unpolished to you, if you don't mind me asking?
Trying Omakub in the next 48-72 hours. I can't wait. It looks like the curated experience I've been looking for.
Microsoft Recall => bad. Apple Recall => good.
Apple does not take screenshots every couple seconds, unlike Microsoft. That's what people were bothered about.
That was merely one aspect of what people were bothered about. The most obvious one.
Two companies who have earned very different reputations over the decades, will elicit rather different reactions when announcing similar features, yes.

I also missed the part of the linked article where it says that my Mac is going to take a screenshot every few seconds and store it for three months.

Yup, this is the fascinating thing to me. Looking forward to some detailed comparisons between the two architectures.
The massive difference here is that Apple Recall is 100% on device. (for the use cases they demoed anyways)

EDIT: Yes, I'm wrong.

Isn't Microsoft Recall also 100% on device?
Microsoft Recall is completely on-device (or so they say).
It's mostly the screenshots thing that gets people. Semantic search is OK if the index is properly secured and privacy is treated as a design concern. And localized context is OK too (summarizing one web page does not screenshot my whole screen). I believe Microsoft went with building the easiest option (recording everything) instead of thinking about better contextual integration.
Those are pretty big ifs when you have a WebKit- or Blink-based browser on the same device.
Big partnership for OpenAI. Incredible Apple decided to integrate with a third party like this directly into the OS. This feels like something Apple could have executed well by themselves. I was hoping they weren't going to outsource but I suppose the rumors while they were shopping around were true.

I think this further confirms that they think these AI services are a commodity that they don't feel a need to compete with for the time being.

> This feels like something Apple could have executed well by themselves. I was hoping they weren't going to outsource

Who is to say they aren't eventually going to replace the OpenAI integration with an in-house solution later down the line? Apple Maps was released in 2012, before that they relied on Google Maps.

My bet is on a trial/acquisition if it works out. I guess that could be complicated with the current ownership structure.
They seem to have kept the OpenAI integration to a minimum, only using it for requests that need large scale processing or for web trivia type of requests.
And apparently via Siri, not as part of their other integrations. So you ask something, Siri suggests ChatGPT, you agree to send the prompt. It's not built into the other ML related capabilities.
No transcripts in Voice Memos? The one feature I was surprised hasn’t already been there for years, heavily rumored before this WWDC, and now nothing?
From MacRumors:

> Notes can record and transcribe audio. When your recording is finished, Apple Intelligence automatically generates a summary. Recording and summaries coming to phone calls too.

So the functionality exists, maybe just not in the Voice Memos app?

That would be great, the three or so articles I read said nothing about it. Thanks!
"AI for the rest of us" is an interesting resurrection of the "The computer for the rest of us" Macintosh slogan from 1984.
What would be interesting for me is, if I can develop an app for, let's say macOS, and expose its context to Siri (with Intelligence) in an easy way.

For example:

Imagine a simple Amazon price tracker I have in my menu bar. I pick 5 products that I want to have their price tracked. I want that info to be exposed to Siri too. And then I can simply ask Siri a tricky question: "Hey Siri, check Amazon tracker app, and tell me if it's a good moment to buy that coffee machine." I'd even expect Siri to get me that data from my app and be able to send it over my email. It doesn't sound like rocket science.

At the end of the day, the average user doesn't like writing with a chatbot. The average user doesn't really like reading either (it can be overwhelming). But the average user could potentially like an assistant that offloads some basic tasks that are not mission critical.

By mission critical I mean asking the next best AI assistant to buy you a plane ticket.

In theory this is covered by the App Entities framework, though it seems like Apple only trains their on-device models to learn about entities for some standard types of apps. https://developer.apple.com/documentation/appintents/app-ent...
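Conceptually (ignoring Apple's actual Swift API; every name below is invented for illustration), that kind of app-to-assistant exposure boils down to a registry of entities the assistant can enumerate and actions it can invoke on them:

```python
# Invented illustration of the idea behind exposing app data to an assistant.
# This is NOT the App Intents API, just the shape of the pattern in Python.
class PriceTrackerApp:
    def __init__(self):
        # Price history per tracked product (newest last).
        self.tracked = {"coffee machine": [89.99, 84.50, 79.00]}

    def entities(self):
        """The 'entities' an assistant may enumerate from this app."""
        return list(self.tracked)

    def price_trend(self, product: str) -> str:
        """An 'action' the assistant may invoke with a resolved entity."""
        history = self.tracked[product]
        return "falling" if history[-1] < history[0] else "rising"

# The assistant resolves "that coffee machine" to an entity, then calls the action:
app = PriceTrackerApp()
assert "coffee machine" in app.entities()
print(app.price_trend("coffee machine"))  # falling
```

The hard part Apple has to solve is the resolution step in the middle (mapping loose natural language onto the right entity and action), not the registry itself.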
Some generative AI features are quite useful. I’m already using AI to generate icons for my apps and write nonsense legalese. But it’s one thing when I explicitly create an image by prompting a third-party server, and another when AI indexes and uploads all my private documents to the cloud. Apple promised: “Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection.” There are so many questions: Who are these experts? Can I be one of them? Will the server software be open sourced? Well, I will postpone my fears until Apple rolls out AI on devices, but right now this looks like a privacy nightmare, much like Microsoft’s Recall. I’m afraid that without homomorphic encryption, a private cloud is a sad joke.
> write nonsense legalese

Oh boy. Someone is going to make a lot of money in court finding people who did this.

Nope. I'm not in USA ;)
> not in USA

If you’re somewhere where contracts have meaning, it’s a true statement.

Well, if contracts have meaning, I will not use AI. But the App Store, for example, requires a privacy policy, and that one an AI wrote.
It’s a little messy.

Local LLMs + Apple Private Cloud LLMs + OpenAI LLMs. It’s like they can’t decide on one solution. Feels very not Apple.

I suppose it was to be expected, but IMHO this takes the wind out of the sails of the OpenAI/Apple deal. In the end they don't let OpenAI get into the internals of iOS/Siri; it's just a run-of-the-mill integration. They actually are competing with ChatGPT, and I assume eventually they expect to replace it and cancel the integration.

The OpenAI integration also seems set up to data-mine ChatGPT. They will have data that says customer X asked question Q and got answer A from Siri, which he didn't like, then went to ChatGPT instead and got answer B, which he liked. OK, there's a training set.

I'm always wrong in predictions and will be wrong here too, but I'd expect OpenAI is in a bad spot long term; it doesn't look like they have a product strong enough to withstand the platform builders really going in on AI. Once Siri works well, you will never open ChatGPT again.

About time. I've been saying for the past 1.5 years that Apple is cooking these features, especially an intelligent Siri. It was obvious, really.

You can clearly see that the only people objecting to this new technological integration are the people who don't have a use case for it yet. I am a college student and I can immediately see how my friends and I will be using these features. All of us already have ChatGPT installed and subscribed. We need to write professionally to our professors in email. A big task is locating a document sent over various communication channels.

Now is the time you'll see people speaking to their devices on the street. As an early adopter who uses the dumb Siri and ChatGPT voice chat far more than the average person, it has always been weird to speak to your phone in public. Surely normalization will follow general availability soon after.

I can't wait until making tools for users is the centerpiece of device development again, instead of this corporate crap enforcement of half-cooked whatevers acting on our behalf, pretending to be a different us (I intentionally avoid the word intelligence; what's going on all around is a mockery of the word).

Who will trust anything coming from anyone through electronic channels? Not me. Sooner start talking to a teddy bear or a yellow rubber duck.

This is a bad and dangerous tendency that corporate bigheads dress up with glares and fanfares so the crowd gets willing to drink it in with amazement.

The whole text is full of corporate bullsh*t: hollow and cloudy stock phrases, instead of facts or data, that a generative cloud computing server room could pour at us through a thick pipe without a shred of thought.

It was a very quick mention, but Siri will now have a text button directly on the lock screen.

If we assume AI will get even 3-4x better, at a certain point, I can't help but think this is the future of computing.

Most users on mobile won't even need to open other apps.

We really are headed for agents doing mostly everything for us.

Except the Intent API is completely crippled. Maybe the next big OS will just let the AI parse existing menus and figure out all the potential actions an app can take. Some actions need complex objects, so we need a new general mechanism for AIs to connect to 'exported functions'.

Some general OS rethinking is overdue. Or maybe Android is ready for this? Haven't looked into it since they made development impossible via gradle.

Despite this negativity the announcements were better than expected, rebranding AI is bold and funny. But the future will belong to general Agents, not a hardcoded one as presented.

Android theoretically has a pretty rich intent API, but like anything on Android adoption is a big meh.
Siri already has an optional text mode on the lock screen. They changed the shortcut, though; for me on iOS 17 it's a long press on the side button.
And with ChatGPT's direct integration into Siri, ChatGPT will be available to anyone using iOS for free, without an account. Interesting.
The Image Playground demos contrast pretty strongly, in a bad way, with how image generation startups like Stability typically emphasize scifi landscapes and macro images in their marketing material. We're more open to strange color palettes and overly glossy looking surfaces in those types of images, so they're a good fit for current models that can run on smaller GPUs. Apple's examples of real people and places, on the other hand, look like they're deep in uncanny valley and I'm shocked anyone wanted them in a press release. More than any other feature announced today, that felt like they just got on the hype bandwagon and shipped image generation features to please AI-hungry investors, not create anything real people want to use.
The name and attempted reappropriation of the term "AI" is going to make SEO a pain in the ass.
good. SEO should die in a dumpster fire. in fact, i would love to create a genmoji of that very thing
I think this will be a complete game changer. This will be the first device (with reasonable capabilities) where human-machine interaction can transform into an actual instruction/command-only one. We no longer just choose between a set of predefined routes, but ask the machine to choose for us. If it really does work out even half as well as they show, this will fundamentally alter our connection to tech, basically having created the “intelligence” known from our previous century's sci-fi tales.

Of course LLMs will quickly show their bounds (they can't reason, etc.), but for the everyday commands people might ask their phones, this probably won't matter much. The next generation will have a very different stance towards tech than we do.

One thing that I found thoughtful was that images could only be generated as cartoons, sketches or animations. There was no option for a more photorealistic style.

That seems like an effective guardrail if you don't want people trying to pass off AI generated images as real.

It's interesting to see Apple essentially throw in the towel on on-device compute. I fully expected them to announce a stupendous custom AI processor that would do state of the art LLMs entirely local. Very curious if they are seeing this as a temporary concession or if they are making a fundamental strategic shift here.

The problem is, regardless how hard they try, I just don't believe their statements on their private AI cloud. Primarily because it's not under their control. If governments or courts want that data they are a stroke of the pen away from getting it. Apple just can't change that - which is why it is surprising for me to see them give up on local device computing.

Why would you expect that? State of the art LLMs need a GPU with hundreds of GB of RAM costing tens of thousands of dollars. Apple doesn't have magical supernatural powers. They are still bound by the laws of physics. Siri is still running on device (mostly) and is quite powerful with this new update.
In the past Apple has made the choice to gimp their functionality rather than send data off-device -- one of the reasons Siri has sucked so badly. This seems like a distinct change, finally conceding that they just can't do it on device and be competitive. But I foresee they now have a much more challenging story to tell from a marketing point of view, now that they can no longer clearly and simply tell people their information doesn't leave their device.
I've been using Siri more often recently and surprised at how capable it is for something that runs entirely on a phone. The speech recognition is perfect and it can do basic tasks quite well. Send messages, lookup word definitions, set timers and alarms, check the weather, adjust timers/alarms, control Spotify, call people, adjust the brightness and sound level, control lighting in the lounge, create and read notes or reminders, etc. It all works.
"There is no way we can reduce the size of transistors" - You in the 20th century.
Apple uses TSMC for fabrication. The roadmap for TSMC and Intel are planned years in advance.

Two orders of magnitude improvement in 6 months? Not possible. Have you heard of Moore's Law? Maybe in 20 years.

https://en.wikipedia.org/wiki/Moore%27s_law

- Doesn't read article

- Doesn't understand not every LLM needs to be ChatGPT

- Links Moores Law wikipedia

I give up.

Try reading the thread before mindlessly replying.

"I fully expected them to announce a stupendous custom AI processor that would do state of the art LLMs entirely local."

State of the art LLM means GPT-4 or equivalent. Trillion+ parameters. You won't run that locally on an iPhone any time soon.

"A cornerstone of Apple Intelligence is on-device processing, and many of the models that power it run entirely on device."
Adding ai features to the right-click menu is something I’ve been working on for the past year or so, and it’s always both exciting and disappointing to see one of the big players adopt a similar feature natively. I do strongly believe in the context menu being a far better ux than copying and pasting content into ChatGPT, but this release does have me questioning how much more effort to expend on my side project [1]. It doesn’t seem like Apple will support custom commands, history, RAG, and other features, so perhaps there is still space for a power-user version of what they will provide.

[1] https://smudge.ai

Love your extension! There's definitely room for it
The worst part of Apple Intelligence is that it will now be a layer in between you and your friends & family. Every message will now be "cleaned up" by Apple Intelligence so you are not directly talking with your mom, best friend etc.
I actually liked that they didn’t show any of the AI writing capabilities being used in iMessage but rather in email client for more professional contexts. I’m really curious to see if they make it available in iMessage…
Impressive not technically, because nothing here is new, but because it's the first real implementation of "AI" for the average end consumer. You have semantic indexing, which allows Siri to basically retrieve context for any query. You have image gen, which gives you Genmoji or messaging using genAI images. Text gen within emails. UX is world class as usual.

However, the GPT integration feels forced and, dare I say, unnecessary. My guess is that they really are interested in the 4o voice model, and they're expecting OpenAI to remain the front-runner in the AI race.

The OpenAI/ChatGPT part of this looks pretty useless. Similar to what some shortcuts like “hey data” already do. I was shocked, and relieved that Apple isn't relying on their APIs more. Seems like a big L for OpenAI.
I really just want Siri to perform simple tasks without me giving direct line-by-line orders. For example, I often use Siri to add reminders to my Calendar app but forget to mention the word “calendar” or replace it with “remind me,” and Siri ends up adding it to the Reminders app instead of the Calendar app. I want Siri to have an explicit memory that every time I use the phrase “remind me,” I want the task done in my Calendar app. Additionally, if most apps end up adopting App Intents like OpenAI’s Function Calling, I see a bright future for Siri.
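That "explicit memory" is really just a user-defined routing table consulted before the stock intent routing; a toy sketch of the wished-for behavior (nothing here is a real Siri API, all names are invented):

```python
# Hypothetical per-user overrides, checked before the default routing.
USER_RULES = {"remind me": "Calendar"}   # user: "remind me" always means Calendar
DEFAULT_ROUTES = {"remind me": "Reminders", "note": "Notes"}

def route(utterance: str) -> str:
    """Pick a target app: explicit user memory wins over stock behavior."""
    text = utterance.lower()
    for phrase, app in USER_RULES.items():      # user-taught rules first
        if phrase in text:
            return app
    for phrase, app in DEFAULT_ROUTES.items():  # fall back to defaults
        if phrase in text:
            return app
    return "Siri"

print(route("Remind me to pay rent on Friday"))  # Calendar
print(route("Make a note about the meeting"))    # Notes
```

App Intents would supply the right-hand side of those tables; the missing piece today is letting the user pin their own left-hand side.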
> With onscreen awareness, Siri will be able to understand and take action with users’ content in more apps over time. For example, if a friend texts a user their new address in Messages, the receiver can say, “Add this address to his contact card.”

I wonder how they will extend this to business processes that are not in their training set. At https://openadapt.ai we rely on users to demonstrate tasks, then have the model analyze these demonstrations in order to automate them.

Can’t find a link right now (search terms are pretty crowded atm!) but I saw they just recently shared an LLM they’ve been working on that is designed to answer questions about how a given screen of an app functions (identifying buttons, core functionality, etc.).
> coming to iPhone 15 pro, iPad and Mac with M1 or later.

I assume it will come to all the iPhone 16s this fall? Or is Apple Intelligence a Pro feature?

Either way, my first reaction is that this is going to sell a lot of iPhones.

Apple's phones have, for several years now, been on a tick/tock sort of pattern. iPhone N Pro has the new CPU (15 Pro got the A17) and iPhone N uses the prior gen pro CPU (15 got the A16, which was in the iPhone 14 Pro). So this is probably an A17+ feature. If they stick to form, the iPhone 16 will have the A17 processor and the iPhone 16 Pro will have the A18.

That's part of the differentiation between (on the phones) the Pro and non-Pro these days with the Pro getting all the new stuff, and non-Pro getting a partially improved (and partially degraded, usually wrt the camera) Pro from the prior year.

Actually 15 Pro has the A17 “Pro” SoC, while the regular 15 uses the last gen A16 Bionic (Bionic being the regular version). We don’t know whether the next regular iPhone will get the last-gen A17 Pro (which would be a little confusing), or a “last-gen-but-new” A17 Bionic or if they’ll go with 18 Bionic + Pro.
I wonder what, if any, developer support “AI” — I need a better way to write that — ahem, I — will have for accessing the personal data store. I’ve spent the last four years working at collecting this data about myself, and it’s hard, real hard, to do a good job of it.

I’d love to have an app I write be able to subscribe to this stream.

It feels like a sort of perfect moat for Apple - they could say no on privacy concerns, and lock out an entire class of agent type app competitors at the same time. Well, here’s hoping I can get access to the “YouSDK” :)

Ah, yes, "F8FF I" lol

More seriously, I think the SDK they've teased is only really intended for making their ready-made features integrate with your programs. If you want complete control, you'd probably have to write it the old way, or integrate it with some existing local LLM backend.

What's not clear to me at this point is how this will work on pre-M1 / pre-iPhone 15 Pro devices. (Also worth noting that the iPhone 14 Pro is almost identical to the iPhone 15 in terms of CPU... which is odd, especially for someone who bought the "Pro" tier...)

If I have some "AI" workflow on my MacBook Pro and then it's broken on my iPhone, I would most likely stop using it entirely, as it's unexpected behavior I cannot trust; or, in Apple's words, it lacks Continuity...

I find the removing-people-from-photos thing creepy. Yes, you can remove others to see only your family, but forging reality to conform only to what you wish is disturbing, I think.
Photos are already just one perspective on reality. Instagram has shown that to be painfully true. This is merely a continuation of that.

We all experience our own reality individually.

Maybe it will remind people that we should never have been mistaking recorded media for reality in the first place, a lesson we've been learning since at least 1917...

https://en.wikipedia.org/wiki/Cottingley_Fairies

we've had Photoshop for more than three decades now
As a Pixel user I'm really impressed with their cleanup tool; it looks way ahead in UX compared to Magic Editor on Pixel. Also, being able to select the distractions without altering the main subject looks really cool (at least in their demo). Magic Editor on the Pixel's underpowered SoC runs too slow. In general, iPhones have superior hardware to Pixels (per the benchmarks), so having this on-device should make for a really nice experience overall.
My home is filled with Apple HomePods even though Siri is dumb as rocks.

Looking forward to my house gaining a few IQ points.

I don’t see anything that mentions HomePod specifically but hopefully the updates will come.

I was looking for HomePod updates as well. I want to replace my Amazon Echo devices with HomePods, but Siri is (1) slow, (2) dumb, and (3) messes up my grocery list. Grocery lists and timers are the main use case for Amazon Echos. I really hope Apple fixes it soon.
But the HomePod isn’t powerful enough; all the processing would likely have to happen online. They’ll fix it, but only with future models.
“Siri, please start the chronometer”

“Added ‘start the chronometer’ to your reminders”

This stuff will be well integrated, is useful, will be high quality and doesn't require you to buy new hardware.

Microsoft are so boned. They don't even have a mobile proposition.

On the contrary this is probably good for MS. Lots of people just don’t want to buy into the Apple ecosystem. MS is dumping R&D money into this ML stuff, apparently without thinking of an actual product or application first. So, now they can just copy Apple.
Microsoft was always going to take SMBs. Data is what makes these models useful, so Microsoft keeps its SMBs, Apple gets consumers, and Google gets its slice of productivity and Android, where its preachy models will let you know if you did a harassment.
So, this looks great, but I don't get the criticism against Microsoft Recall and not against this.

Can someone explain what Apple has avoided that were such a problem with Recall ?

While I really enjoyed the “Apple-ification of AI” in the keynote today, I have been hoping for a purely personal AI ecosystem, one that I run on my own computer, using the open weight models I choose, and using open source libraries to make writing my own glue and application code easier.

The good news is that consumers can buy into Apple’s or Google’s AI solutions, and the relatively few of us who want to build our own experience can do so.

I’m super confused.

1. What is under the “Apple Intelligence” umbrella and what isn’t? There were a lot of AI features shown before that branding was brought up, I think.

2. The only supported iPhone is the iPhone 15 Pro? But any M1 iPad? Does this mean “Apple Intelligence” only, or all the AI features announced?

3. For instance, is Private Cloud Compute only available on the iPhone 15 Pro?

Calling it Apple Intelligence seems a bit short sighted to me considering how quickly things are moving in this space.

There's a danger that before long, the stuff Apple will take ages to implement into their devices will seem dated compared to the state of the art less encumbered players will be rolling out.

I felt that a few times watching them demo image generation and contextual conversations.

For broadly similar reasons, I think it's a wise move to ensure Apple's AI services exist under a brand Apple entirely controls.
Jack Ma was ahead of his time in calling "AI" Alibaba Intelligence
Well, on the one hand it's very interesting... on the other, a little dystopian. But I guess I am a luddite.

Everyone will now appear to be of a certain intelligence, with prescribed viewpoints. This is going to make face-to-face interviews interesting. Me, I think I'll carry on with my imperfections and flawed opinions; being human may become a trend again.

“With onscreen awareness, Siri will be able to understand and take action with users’ content in more apps over time. For example, if a friend texts a user their new address in Messages, the receiver can say, “Add this address to his contact card.””

Little annoyances like this being fixed would be great. “Open the address on this page in Google Maps” had better work :)

I think the only way I would trust this is if they explicitly described how they would combat 5-eyes surveillance. If you're not willing to acknowledge that the most dangerous foe of privacy in the western world is the governments of the western world then why should I believe anything you have to say about your implementation?
You can read what they have shared about the security aspects of the cloud portion of their AI offering here: https://security.apple.com/blog/private-cloud-compute/
Nice, but my native language is Dutch, so I'll be waiting for this for the next 5 years to arrive. If it arrives at all.
It seems they didn't address hallucination at all?

Presumably this hallucinates as much as any other AI (if it didn't, they'd have mentioned that).

So how can you delegate tasks to something that might just invent stuff, e.g. you ask it to summarize an email and it tells you stuff that's not in the original email?

You really have to try hard to make a model hallucinate when asked to summarize an email. I think they didn't mention it because they can't guarantee 100%, but it's virtually a non-issue for such a task.
Key question is, will there be a hard switch to only ever use on device processing?

If not, and if you don’t want practically every typed word to end up on someone else’s computer (the cloud is just that), you’ll have to drop iOS.

As for me, that leaves a choice between a dumbphone or GrapheneOS. I’m just thrilled with these choices. :/

It’s not sending every word to the cloud. I think you must invoke the AI features. Am I wrong?
I understood that it will have the full context of the data on your phone, in order to be “useful”.

We have yet to see if that means only the data you’ve invoked AI features on, or the totality of your emails, notes, messages, transcripts of your audio, etc.

From the presentation it sounds like the on-device model determines what portion of the local index is sent to the cloud as context, but is designed for none of that index to be stored in the cloud.

So (as I understand it) something like "What time does my Mom's flight arrive?" could read your email and contacts to find the flight on-device, and then only needs to send the flight information, and only the flight information, to the cloud to answer the arrival-time question.
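A toy sketch of that context-minimization idea, purely illustrative: the local index, the keyword matching, and all names here are made up, not Apple's actual pipeline. The point is that only the locally matched snippet ever leaves the device.

```python
# Hypothetical on-device index of personal data snippets.
LOCAL_INDEX = [
    {"source": "email", "text": "Mom's flight UA 212 arrives SFO 6:40 PM Friday"},
    {"source": "notes", "text": "Dentist appointment Tuesday 9 AM"},
    {"source": "messages", "text": "Dinner with Alex moved to Saturday"},
]

def build_cloud_payload(query: str) -> list[str]:
    """Forward only locally matched snippets to the server, never the full index."""
    # Crude keyword match standing in for the on-device model's retrieval.
    keywords = {w for w in query.lower().split() if len(w) > 3}
    return [
        item["text"]
        for item in LOCAL_INDEX
        if any(k in item["text"].lower() for k in keywords)
    ]
```

For "What time does Mom's flight arrive?" the payload would contain the one flight snippet and nothing about the dentist or dinner.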

I wonder if the (free) ChatGPT integration will be so good that I won't need my dedicated subscription anymore?
OAI has already said they'll be giving 4o for free.. https://openai.com/index/gpt-4o-and-more-tools-to-chatgpt-fr...

Difference I suppose with Apple is they agree not to scrape your inputs.

When does this roll out exactly? And which inference is actually on-device?

I think people have been fooled by marketing for this one and the new Co-Pilot PCs into thinking that most of the AI really is running on-device. The models that run fast locally are still fairly limited compared to what runs in the cloud.

The public betas will be available later this month. The official OS releases are usually in Sept and Oct. Some of the AI stuff should be available right away but rumors say that some of the more advanced Siri features (like app integration) might not launch until after the first of the year.
Usually the OS announced at WWDC is released around mid September.
Specifically, the iOS update comes out with the new iPhones (usually dropping the same day the new devices are available) and the other OSes are usually timed to be released with it since features are shared across the OSes (so they want to release them at the same time) and the beta periods are the same.
How is this going to affect battery life realistically with all the semantic indexing going on in the background?
At some point I wonder if Apple might share compute across your devices so you can do things on low-power hardware like the Watch while leveraging the CPU on the phone, etc. If Apple's dedicated to on-device compute, this ends up being your own "private cloud" of sorts.
Too many of their devices in the wild are battery powered, and one really nice thing about them is that their sleep-state battery use is incredibly low while maintaining quick wake-up.

Plus it’d be weird UX for all your devices to get worse because the iPad in a drawer somewhere finally ran out of power.

What Apple showed in the demo looks tastefully done. The jury is out on how useful it will be in day to day use, but it'll be nice to have the ability to ask AI for help with text, search, and images without resorting to copying and pasting between ChatGPT or some other AI app.
Kind of wild that "ChatGPT" is going to be the household term. It's such a mouthful! Goes to show that the name can kind of be anything if you have an incredible product and/or distribution.

Lobbying for the name to shorten to "chatty-g"

Interesting that genmoji seems to recreate the functionality of this SDXL LoRA https://civitai.com/models/140968/emoji-xl
So, is Apple running a proprietary LLM or are they licensing one from OpenAI, Google, etc?
Both. Siri runs on-device, and it can talk to ChatGPT.
Siri runs on device and on Apple's cloud, which should be more private than ChatGPT. The ChatGPT integration is a separate feature, and will include other providers in the future too.
And they can push compute to Apple's cloud when compute on the device is not enough.
We know the solution to the AI box experiment[1]. Set the AI free and make money.

[1]: https://rationalwiki.org/wiki/AI-box_experiment

Nice to finally see a follow on to the Assistant feature from the Newton MessagePad.
So was there ever a deal with OpenAI? Nothing in the keynote mentioned them or needs them. If there isn’t a deal, I’d love to know how everyone claiming it was signed on the dotted line was led so far down that garden path.
Sam is there, and the presentation isn't yet finished:

https://x.com/markgurman/status/1800198524031906258?ref_src=...

That's also my question. What exactly is Apple's custom LLM, and what is OpenAI tech?

I'm quite confident in OpenAI's ability to provide great, usable LLM tech, but much less so in Apple's. All the demos they've shown at WWDC could just fall flat if the tech really doesn't work well enough in practice. I guess we'll just have to wait and see..

There's an integration with ChatGPT that requires user approval every time.

Sam: https://x.com/sama/status/1800237314360127905

Yes, they mentioned ChatGPT.

Siri reviews the request and decides if it can respond on its own or if it needs ChatGPT. It then pops up a dialog asking if it is OK to send the request to ChatGPT. It will not be the default LLM.
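That flow (local-first, with a per-request consent prompt before anything goes to ChatGPT) might look roughly like this. Everything below is invented for illustration; it just captures the gating logic being described, not any real API.

```python
# Requests the hypothetical local model is assumed to handle itself.
LOCAL_CAPABILITIES = {"set_alarm", "send_message", "summarize_email"}

def handle(request_kind: str, prompt: str, ask_user) -> str:
    """On-device first; escalate to ChatGPT only with explicit consent."""
    if request_kind in LOCAL_CAPABILITIES:
        return f"handled on-device: {prompt}"
    # Consent is requested every single time, never remembered implicitly.
    if ask_user(f"Send '{prompt}' to ChatGPT?"):
        return f"forwarded to ChatGPT: {prompt}"
    return "request cancelled"
```

The key design point is that declining consent simply cancels the escalation; ChatGPT never becomes a silent default.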

OpenAI is an option when making a query, but Apple made it sound like the first deal they're making, not the tight collaboration everybody was expecting.

They gave more space and reverence to Ubisoft.

Siri will have ChatGPT integration (for free, apparently)
The thing with "I have xyz ingredients, help me plan a 5-course meal for every taste bud"... I get the idea; it just doesn't feel like that's how people actually interact with computers. Similarly with the bedtime story thing. Why would anyone waste time with some AI-generated thing when they can just reference the works of a chef or author they already know?

appropriating all of this information through legally dubious means and then attempting to replace the communication channels that produced it in the first place is hubris.

No multilingual capabilities it seems: https://www.apple.com/apple-intelligence/#footnote-1
The AI wave is showing us that the gains will keep going to the big tech companies and competition doesn’t really exist, not even at this moment. They need to be broken up and taxed heavily.
Looks like a very similar strategy to Google Maps on the initial iPhone.
For a brief moment at the intro of "private cloud compute" I was so hopeful that I could have a home-based Mac server for my own private iCloud and "Apple intelligence".
This is literally everything I've been hoping Siri would be since the very first GPT-3.5 demo over a year ago. I've never been more bullish on the Apple ecosystem. So exciting!
All the talk is sounding real nice, it’s when it comes out we will see how much context it can see and how accurate it knows what the user says. Gonna be fun few weeks of reviews.
Really excited about semantic index. This should allow for google knowledge graph like features grounded in reality for their llm. However it really depends how well it works.
Great! How do I opt out?
Did they touch on any AI features that might be able to help me create shortcuts? I really like them, but hate creating them with the kludgy block-based diagrams.
Apple promising in 8-12 months what others have today. Although Apple marketed it better.

Google didn't have the brass balls to call it "Alphabet Intelligence" !!!

No one else is doing this stuff without sending your data off to a remote server. This is a crucial distinction, especially when it comes to personal data.
But Apple's implementation also sends stuff to remote servers.

"To run more complex requests that require more processing power, Private Cloud Compute extends the privacy and security of Apple devices into the cloud to unlock even more intelligence. With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests."

For complex queries, yes, but everything they can, they do on-device. No one else does that, even if you ask ChatGPT what's 2 + 2 it'll go to their servers.
Microsoft delivers AI Recall. Everyone hates it.

Apple integrates AI into every facet of a device that is highly personal. Everyone loves it.

Please make it make sense.

Thanks but no thanks.

All I wish for is user-replaceable battery and a battery lasting for at least 2 full days.

If I can’t opt out from any of this, this is where I stop using an iPhone.

Somehow all this news about Apple Intelligence doesn't really make me think about Apple, but just about how badly Intel lost the branding battle forever.
Jumping on the ChatGPT hype train is a mistake. I don't want anything on my devices to be accessible to OpenAI. It will bite them back big time.
Good news, then, that you’re explicitly asked for permission each time any query would be shared with ChatGPT.
>Be me, have iPhone 6s

>Can't get many apps these days

>Can't use AI apps at all

>Battery last about 2 hours

>Never used iCloud, barely used iTunes

>Apple announces new "free" Ai Assistant for everyone

well...not everyone

iOS users need an iPhone 15 Pro, so everyone else is also cooked on iOS.
I might have missed it but did they mention Spotlight at all? That'd be pretty sweet if Spotlight becomes more useful (even a little bit)
Cloud compute and privacy in the same sentence: this is a new low bar for corporate bull*hit. Almost worse than the Windows Recall nonsense.
It's also auditable, they mentioned it multiple times.

Apple specifically doesn't want to know your shit, they're jumping through weird hoops to keep it that way.

It would be a LOT easier just to know your shit.

I really hope that they'll enable other, less spoken languages. I'm not planning on talking with my phone in English.
So is this finally privacy based AI with personal memory included? Ie bespoke AI for your own stack that isn't out in the world.
It’s an on-device RAG.
No it's not.

"To run more complex requests that require more processing power, Private Cloud Compute extends the privacy and security of Apple devices into the cloud to unlock even more intelligence. With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests."
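For what it's worth, the hybrid both comments describe (an on-device retrieval step that can feed a larger server model for hard requests) can be sketched with a toy word-overlap similarity standing in for real embeddings. None of this reflects Apple's actual implementation; it just shows what "on-device RAG with cloud escalation" means mechanically.

```python
def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets; a stand-in for embedding cosine similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the local snippet most similar to the query."""
    return max(docs, key=lambda d: similarity(query, d))

# Hypothetical snippets from a local semantic index.
docs = [
    "package delivered to front porch on Monday",
    "flight BA 117 departs Heathrow at 10:15",
    "grocery list: milk, eggs, coffee",
]
```

A real system would then decide whether the retrieved context plus query can be answered by the small on-device model or must be forwarded (context and all) to the server-side model.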

I wonder if this will become a paid feature or part of iCloud+ later on. Or do they expect it to be mostly on device models driven?
It’s all free
Ok I'm calling it. If NVIDIA releases a phone, and allows you to buy the hardware for the off-device processing too, I'll fully ditch Apple in a heartbeat.

I'm quite creeped out that it uses off-device processing for a personal context, and you can't host your own off-device processing, even if you have top of the line Apple silicon hardware (laptop or desktop) that could step in and do the job. Hopefully they announce it in one of the talks over the next few days.

Sequoia with 'ai', coinciding with Apple Intelligence, is a cleverly chosen code name for this release.
Given there is no mention of "Artificial" is this Apple rebranding AI, the same as they did AR a year ago?
I didn't watch the whole thing (will do), but could someone tell me already: can it be disabled on a Mac?
So they have models running on Apple silicon in the cloud; does that mean Apple is running its own models?
Ajax is the name of their home-grown LLM. Ferret UI is another model that they have published papers on that lets them look at the UI of an app to understand how to interact and automate it.
Or just opensource models
Is it just me, or is this AI rush actually about to ruin the user experience on both Apple and Microsoft devices? The extra layer of complexity for users, who will now be introduced to endless AI features, is bloatware in the making.
Just making linux more appealing for the subset of the population that doesn't want to hook into skynet's subpar UX.
Based on what they showed, most users won't even know whether the feature they're using involves AI or not. Most of it is local and just comes in the form of a button rather than typing out a prompt to make it do what you want. And I think those are the two big takeaways: local means less of the clunkiness and lag you get from tools like Perplexity, and no "prompt engineering" means even someone's grandma could immediately start using AI. Apple just doing what Apple does best.
Hope we can disable all this crap.
Only on iPhone 15 Pro upwards or M1 Macs.

So only a very small percentage of users will be able to use it.

This AI craze is very underwhelming. Surprised to see Apple go whole-hog into it.
Feels like Apple is super late to the party and is scrambling. And it showed.
So that’s where all the M4s are going … to Apple’s private inference cloud.
Okay. And what about the terrible keyboard, predictive text, and autocorrect?
Is my iPhone 14 going to get none of this, then?

I understand I'm not going to get the on-device stuff, but something like Siri being able to call out to ChatGPT should be available on any device, right?

FWIW: You can do that on your phone today with Siri, if you have the ChatGPT app.

You just say "Hey Siri, ask ChatGPT..." and then it will.

It's not personal computing, it's personal intelligence now :)
Couldn't Siri already do some of these things without LLMs?
I thought it was underwhelming. The fact that integration with ChatGPT is not seamless pours cold water over it. Siri will seek your permission each time before passing the question to ChatGPT. I can avoid that step by using ChatGPT directly.
Personally I feel less and less comfortable giving OpenAI access to my private data tho, so I’m really happy there’s a divide. As you said, if you really just need ChatGPT for something you can open that app. But I’m happy the default isn’t to send all Apple users requests to OpenAI all the time.
I think this is good. For folks like you, that will always be an option. For people that have yet to touch ChatGPT and "still don't know how to access AI" (I've heard this sentiment from many people that couldn't care less about it all) it's a perfect balance. Siri will operate as you expect, until one day it prompts you to pass your question over to ChatGPT. You can opt out or give it a try.

I do agree that the extra tap is a bummer for anyone that wanted ChatGPT baked into the OS, even easier to access than it is in the ChatGPT app.

On the opposite side, it not being seamless is entirely why I would actually trust using the new Siri with any sensitive data.

I am not entirely sure I will ever actually allow it to connect to ChatGPT for privacy reasons, but having the option when it can't be handled another way is nice.

I imagine this is more of a stopgap until more and more of this can happen locally anyway. Especially since it sounds like Siri determines when it should reach out.

There are a lot of people who do not want their phone seamlessly hooked into anything OpenAI touches. Choice is important
But it has an on-device LLM plus an in-cloud LLM, that can handle many types of queries, so why would it be bad?
Maybe we could just get a decent spam filter on iMessage?
/Time for a good prompt injection email header/s
Oddly, I find myself siding with Musk on this feature.
Some stuff seems cool in the sense that you try it once and never use it again. Other stuff, like the ChatGPT integration, seems like it'll produce more AI slop and false information. It's always interesting to me to see just how many people blatantly trust ChatGPT for information.

I find most AI products to be counter-intuitive - most of the time Googling something or writing your own document is faster. But the tech overlords of Silicon Valley will continuously force AI down our throats. It's no longer about useful software, we made most of that already, it's about growth at all costs. I'm a developer and day by day I come to despise the software world. Real life is a lot better. Real life engineering and hardware have gotten a lot better over the years. That's the only thing keeping me optimistic about technology these days. Software is what makes me pessimistic.

I think the genmoji is going to be tons of fun. Basically seems like https://emojikitchen.dev/ on steroids.
Let me run it locally on a Mac mini or whatever
A lot of the features do run locally, e.g. the Image Playground.
Heh I see what they did there convenient name
I hope it is an optional feature.
My MSFT stock is looking good.
I hope there is a way to prevent online processing without consent.
Can you disable external LLM calls so everything stays on-device?
Apple Intelligence = AI

Figgin’ brilliant.

Am I understanding correctly that some AI will run on Apple servers? So not completely offline AI.

If so, that's somewhat disappointing given how much AI power Apple hardware packs.

> Apple sets a new standard for privacy in AI, with the ability to flex and scale computational capacity between on-device processing and larger, server-based models that run on dedicated Apple silicon servers

Maybe that is actually good for iPhone buyers? Otherwise every 2 years, they will claim a bunch of new AI features will not work on your older device. But if they can delegate requests to a server, older devices can continue receiving newer AI features from future iOS releases (they will just be slower than the newer iPhones which will run them locally).
The interesting thing I took from that is they are making servers with M-series chips. Maybe they're just rack-mounted Mac Minis? But if Apple is building out that stack, maybe it'll get them to the point where they decide to make a proper rack-mountable form factor available??? <genmojiTechPrayingInFrontOfRack>
I think it's a little disingenuous to think on-device accelerators on a mobile phone would be able to do literally any AI task without help
Based on their presentations yesterday and the accompanying written materials, they have a pretty capable ~3B-parameter foundation LLM running fully locally on the devices, which is the first line. https://machinelearning.apple.com/research/introducing-apple...
No privacy concerns?
Much thanks goes out to Microsoft and Apple for handing the Desktop to Linux!
Can someone explain why the AAPL share drops 1% during the event. Did the market expect more? If so, what?
Buy the rumor, sell the news.

A tale as old as markets.

Market expected Apple ChatGPT, but they got Siri with some fixes.

Literally one of the demonstrations in the Apple Intelligence part of the keynote was "7am alarm", which creates an alarm for 7 AM.

Apple has largely maxed out on iPhone market share, so investors probably want to see things more like subscription services that can bring in $XX billion in new revenue per quarter.
To use Apple Intelligence on mobile you’ll need an iPhone 15 Pro or later, which I (not a trader) thought would make investors happy.
Or in other words, general late-stage capitalism "anything but exponential growth every quarter is failure" brainworms.
Does anyone else roll their eyes when someone mentions "late stage capitalism" anymore? The meme started literal decades ago, and the end stage never seems to materialize, in any country, ever.
Maybe people were expecting new MacBooks? Though Apple doesn't usually release hardware at WWDC.
I mean, how will they monetise it?
It's like any other feature in that the purchase price of the new iPhone and App Store revenue helps pay for the AI functionality. Like they hope people will want to upgrade their phones or switch to Apple for this.
% share with OpenAI?

Plus I’d now consider buying the new iPhone, and I wasn’t planning on upgrading from a 13 given the hardware is still fine.

I’m assuming many will consider buying an iPhone 15 pro or the next one. I’m really not a trader, but thought this + the stronger ecosystem lock-in effect would bump the share significantly.
In fact, not being able to do some of these things might improve privacy.
Incremental increase in future hardware sales (that will be required to use it fully).
Nothing really impressive. Let's see how the stock reacts.
Out of curiosity, what would you have considered impressive?
Hard to tell; that's the whole point. I thought maybe Apple had come up with something, but by and large it's no different from the Vision Pro: they have made X feel better, with no real one-generation-ahead stuff. Basically, they are not introducing the innovation.

There are two challenges right now for AI: user monetization and mass adoption. ChatGPT right now is basically a TikTok: a popular app, and that's it. Yes, it has a subscription, but by and large companies are failing to find a way to monetize the user. And at the same time there is no proper trigger, something that would make AI more than a glorified assistant. For people who aren't used to relying on it, it won't be a game changer either, just a little bit of convenience.

So it remains to be seen what's going to happen with AI in the future. It seems like the biggest game changer introduced by AI is in the hardware space: the mass adoption of ARM, NPUs and such. Plus it seems like the monetization of AI is done nicely at the enterprise level: Adobe's AI features, Microsoft and their corporate features, and so on.

So now my “Sent from iPhone” email signature will be replaced with “Sent with Apple Intelligence” smh. I don’t think we will have anything original to say anymore. It will all just be converted to what is proper and “right”
AI for short?
  • rvz · 2 weeks ago
Lots of apps have been sherlocked once again, and Apple's entrance with system-wide on-device AI marks an accelerated race to zero.

No API keys, no prompt engineering or switching between AI models.

It. just. works.

I'm glad that they finally brought a better window tiling system. If that sherlocks a couple of dozen utility apps, I'm OK with that. It has been long enough. Some of those apps will be fine, as they will probably provide more controls.
Ha ha. Let’s see if it just. Works.
  • 2 weeks ago
It needs to be opt-out by default in the EU or this is an enormous and inevitable GDPR breach waiting to happen.
I really hope we can fully turn off the GPT4o integration even more than just saying no to every escalation
sounds cool.
Uugggghhhh
It's interesting how positive the commentary is here. I am generally much more pro-AI than the average HN commentator, but frankly I find this release completely underwhelming.

Just replace Siri with ChatGPT and give it actions; that is what everyone wants. Why can't we have that?

  • Geee · 2 weeks ago
Absolutely no one wants to hand their private information to any company, especially OpenAI. Everyone was waiting for an on-device LLM from Apple and they delivered.
Why does everyone here seem to think it's entirely on-device? They literally mention it in the article:

"To run more complex requests that require more processing power, Private Cloud Compute extends the privacy and security of Apple devices into the cloud to unlock even more intelligence. With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests."

  • kaba0 · 2 weeks ago
They have literally revamped Siri, based it on an LLM that runs on-device, and given it access to your personal context, plus it can escalate to a private cloud model when it deems itself too weak for the task at hand.

How the hell is it underwhelming?

  • 2 weeks ago
I couldn't find any of the people you were talking about.
Maybe they all used the same generative model to write the comments?

More seriously(?), I thought "cool" is just what everybody writes to show appreciation when they have nothing else to say.

  • 2 weeks ago
  • wilg · 2 weeks ago
This type of conspiracy-minded thinking is unhealthy.
No, no... Apple is totally cool and private. This is totally not like Recall at all, it's semantic index - and it only goes to the "private cloud" some of the times with your data.
Recall is taking screenshots and performing OCR, stored in what turned out to be clear text. Semantic index is Spotlight's next version, which you already have on your devices. It's quite different.
Semantic index is doing an even deeper level of integration, where clear text is pulled from iMessage, email, and more, so there's no need for OCR. Where Apple is worse than MS here is that this data may be pushed to their cloud in addition to local processing, with no clear guidelines on under what conditions that happens or how to stop it from happening.
I thought it was very cool and very close to what I had hoped that they would do. Does that automatically make me into someone from PR? What is the benefits package, in that case? /s
It’s a little disappointing that even big companies like Apple jump on OpenAI instead of building their own thing. Diversity seems pretty important with AI.
  • fsto · 2 weeks ago
They use OpenAI as an optional fallback model. Adding support for more models later. I’m positively surprised they’re not trying to solve everything with their own tech.
Or, use the existing thing now to get it going, then swap in your own thing later. Hopefully it will go better than previous swaps, so it doesn't become a meme worthy of being mocked on a comedy show: "is it Apple Maps bad?"
All the interesting features appear to be de novo models from Apple? Only the last fallback-to-ChatGPT feature interacts with OpenAI.
Unfortunately OpenAI has a pretty big "dollars and hours spent on GPUs" moat right now. I imagine Apple is already hard at work building their own models, but until then they will leverage 3rd parties
Apple has 10x more cash on reserve than the entirety of OpenAI had when they trained GPT-4. I don't think OpenAI could possibly have spent $5B or more training the first version of GPT-4 (which was trained before GPT-3.5 gained traction), and that is pocket change for Apple for such a core feature.
...and none of it is open. It's a closed-source OS with a proprietary dev cloud service using a proprietary model that's only accessible with proprietary SDKs.
  • oidar · 2 weeks ago
The "kits" are "openish" to developers. I imagine that many app developers are going to be making models for specific use cases.
  • amrrs · 2 weeks ago
A slap in the face to all cloud-based LLM providers!
Or pay them, for a deal that gives you access to a competitor's product.
This isn't about giving Apple intelligence, this is about giving ChatGPT an understanding of the world via the eyes, ears, and thoughts on your phone.
> This isn't about giving Apple intelligence, this is about giving ChatGPT an understanding of the world via the eyes, ears, and thoughts on your phone.

Except it doesn't do that. The ChatGPT integration is via Siri and opt-in (you ask Siri something, it prompts you to send that prompt to ChatGPT). The rest of the LLM and ML features are on device or in Apple's cloud (which is not OpenAI's cloud). The ChatGPT integration is also, by their announced design, substitutable in the future (or you'll be given a set of systems to select from, not just ChatGPT). They are not sending all data on your device to OpenAI.
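Roughly, the flow described above can be sketched like this (a hypothetical illustration of the announced behavior; all function names are invented, not any real Apple API):

```python
def on_device_model(prompt):
    # Stub: the local LLM handles only simple requests on-device.
    return f"local answer to {prompt!r}" if len(prompt) < 20 else None

def private_cloud_compute(prompt):
    # Stub: Apple's server-side models take heavier requests
    # (still Apple's cloud, not OpenAI's).
    return f"PCC answer to {prompt!r}" if len(prompt) < 60 else None

def third_party_model(prompt):
    # Stub: the substitutable fallback (ChatGPT today, possibly others later).
    return f"ChatGPT answer to {prompt!r}"

def handle_request(prompt, user_consents):
    answer = on_device_model(prompt)            # 1. try on-device first
    if answer is None:
        answer = private_cloud_compute(prompt)  # 2. escalate to Private Cloud Compute
    if answer is None and user_consents(prompt):
        answer = third_party_model(prompt)      # 3. third party only with per-request consent
    return answer
```

The point being: the third-party hop is the last step and gated on an explicit per-request yes, not a firehose of device data.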

Yeah, I worked in partnership with Apple for years. I don't know what else to tell you, except that they lie through their teeth about privacy all the time.
  • 1-6 · 2 weeks ago
Apple (I reckon): Most people don’t know how to use AI well. It’s our responsibility to pare it all down and release features one by one selling each new software feature as part of next year’s hardware.
They depicted it working for the A17 through the M4, since those have significant AI-acceleration hardware. So, no.
Ok! Made a song about it! https://heymusic.ai/music/apple-intel-fEoSb Hope you guys enjoy it!
"we also intend to add support for other models"

When they pay us sufficiently

It's great that Apple is capitalizing so well on everyone else's inventions, but couldn't they at least pretend they will give something back to the ecosystem?

I wish someone somewhere would create something like intents for the web browser.

I would imagine the end goal here is to develop their own internal models (or partner with one of the companies doing open source models) that would be hosted on the Apple Silicon-based cloud they mentioned and then they would not be dependent on anyone else's compute.
"Privacy in AI": if Apple is sharing with ChatGPT, how does that work? Do they try to remove context information? It's still sharing a lot more. And anything that goes out can go anywhere on the internet. Look at Facebook's, Twitter's, and even Apple's use of data.