Techbird : Your Gateway to the Future of Innovation

A Realistic Look at Apple Intelligence… So Far

November 26, 2024 | by ranazsohail@gmail.com


Let’s talk about Apple Intelligence. Apple made a big promise about a game-changing update that would impact their entire lineup, but that promise is starting to lose some of its shine. It was first announced at WWDC 2024, nearly six months ago, and it’s still not fully available. The new iPhone came out in September without any Apple Intelligence features, but software updates have started rolling out, and the full rollout is expected by March 2025. That’s a long time to wait.

So, let’s take a look at the Apple Intelligence features that are available right now and whether the promise Apple made is really worth the hype.

First up: the writing tools. These features are available on iPhone, iPad, and Mac—though interestingly, there’s nothing on the Vision Pro, their most cutting-edge platform. That’s kind of surprising. As for the writing tools, you’ve probably heard about them because they’ve been mentioned a lot. Basically, they use generative AI to help refine and improve what you’ve already written.

This is easiest to demonstrate on a Mac, but here’s how it works: you write something, highlight it, and then pick one of the writing tools. The interface, though, is a bit odd—it feels repetitive. You can click on any of the writing tools directly, or you can click “show writing tools,” which just gives you the same list again.

So, let’s say you’ve written a message and want to make it sound friendlier, or maybe you’ve typed up something for work that needs to be more professional. That’s where the AI comes in. I’ve tried it on everything from full video scripts to just quick messages, and it does the job. The “friendly” mode uses shorter sentences and adds a lot of exclamation points, while the “professional” mode does the opposite—it’s more formal and avoids exclamation points. I find the “concise” mode the most useful—it takes your text and shortens it by about 40%, making it more to the point.

Once it’s done, you can either copy the revised text or replace the original with it if you’re comfortable. The process takes just a few seconds, and it works offline since everything is processed on-device. The proofreading feature is helpful too—it catches things like capitalization errors and proper nouns, which can be a nice bonus alongside the auto-punctuation checker. The “summary” and “make table” features seem like they’d be really useful for large documents, but unfortunately, they don’t always work with my longer ones that could really use a summary or a table.

Even though I write for a living, I haven't found the writing tools all that useful.

Take notification summaries, for example. Personally, I've found this feature pretty much useless. As the name suggests, it condenses your notifications into short summaries. So if you get a long message or multiple notifications from the same app, it tries to combine them into a single update so you can get the main idea quickly. Whether it's a simple text or a flood of messages in a group chat, it condenses everything into a line or two.

But after trying it out, I've realized that the notifications I get rarely need summarizing. I've never had four text messages that made sense squished into one. It might work for some people, but it's not just me either: plenty of memes and examples online show Apple's notification summaries doing some pretty funny (and sometimes brutal) things. So, while it can be amusing, I ended up turning the feature off.

Next up is Genmoji. Believe it or not, there was some buzz around this one. It's exactly what it sounds like: an AI tool that lets you create any emoji you can imagine. So, let's say you're chatting in a messaging app and you need a specific emoji that doesn't exist. All you have to do is describe it, hit "create a new emoji," and within a few seconds, the AI generates it for you. It's a pretty fun concept.

That said, the AI does have its limits. If you ask for something too crazy or graphic, it might refuse. But it has no issue creating something like a potato holding a gun, so you can totally send that as a reaction if you want. It’s a super niche tool, and while it might be fun for some people, it’s not something I’d use all the time. Still, it’s cool that it’s out there.

Then there’s Image Playground, which is kind of similar but its own thing. This one’s a bit odd—it’s a tool that lets you generate cartoon-style images based on any description you give. You just type in what you want, and the AI creates it for you. There are also some inspiration buttons to help guide you before you start typing, suggesting things or characters you might want to include in your image.

You can pick anyone from your photos that your phone recognizes, and it’ll use that as the starting point to create an image. From there, you can add all sorts of things—like accessories, backgrounds, props, or whatever you want. For example, here’s me with a hard hat at a disco. The app generates it in seconds, always in a cartoon style, not photorealistic, and gives you a few options to pick from. Once you’re happy with it, you can easily copy and paste the image wherever you want.

Google and a few others have similar features on their devices too, kind of like a fun tool to create AI-generated images. It’s not super practical, but I get why they don’t do photorealistic images—that could open up a whole bunch of problems. Plus, it’s meant to avoid generating things that could be offensive, like guns. However, I’ve noticed if you throw in some extra details, like a chef hat or fireworks, it might let you get away with it. Since it’s still in beta, you can expect some weird results.

Honestly, though, this is just another app on my phone that I’ll probably never use again after making this video.

Priority Notifications:

This is one feature that actually seems like it could be useful, especially for those of us who deal with constant notifications. As both an iPhone and Android user, I’ve noticed that Apple has always struggled with notification management. For a long time, it felt like an endless flood of alerts coming from every direction.

With priority notifications, Apple is trying to address this by highlighting the most important ones when you’re in Focus mode, which limits distractions. It could also work well in the Mail app, using AI to push high-priority messages to the top, along with other key categories. Personally, I don’t use the default Mail app, but a lot of people do, so this could really help. It’s something Gmail has had for years. And the reduced notifications mode is supposed to let through only the most important alerts, keeping everything else quiet.

The Photos App:

Okay, let’s be real – the new Photos app? It’s awful. Everyone I know hates it. It’s just not good. But here’s the thing: Apple has added one AI feature to it, and it’s one that many other companies have been using for a while now: background object removal. It’s a nice addition, but given how many AI tools other companies have packed into their photo apps, it’s definitely not a standout move from Apple.

It’s basically the same as Google’s Magic Eraser tool. Imagine you have a photo with a subject in the front and an annoying object in the background that you want to get rid of. You open the edit button, tap the cleanup tool on the right side, and often, it will automatically highlight the object with a rainbow glow to show you what it’s detected. If it doesn’t do this automatically, you can just tap or circle the area you want to remove, and in a few seconds, it’s gone, with the background filled in seamlessly using generative fill.

Of course, some scenes work better than others. It’s pretty much the same as with Magic Eraser or Photoshop—backgrounds with patterns or repetition tend to give the best results. Honestly, I think this version works better than Google’s. It’s better at quickly outlining the object you want to remove with just one gesture, which is really impressive.

One more feature worth a quick mention: audio transcription summaries. Right now, the only way to start one is by going through the Notes app, creating a new note, and then attaching a new recording to capture your audio. It's a little clunky and could be easier.

Now, let's talk about visual intelligence. This feature is exclusive to the iPhone 16 and 16 Pro models. To use it, you long-press the Camera Control button on the side of the phone, and it opens a full-screen viewfinder with three buttons: the shutter in the middle, "ask" on the left, and "search" on the right.

So, if you point this at any object or subject and tap “ask,” you can ask ChatGPT anything about what it sees. If you hit “search,” it’s like doing a reverse Google image search to identify what’s in front of you—so, you get the choice between ChatGPT or Google. If you press the shutter button, it takes a photo and then lets you either search again or ask another question through a pop-up menu. It’s a bit repetitive, honestly, but it works.

The interface looks nice and is pretty fast, but honestly, it's nothing new. I was just watching a Galaxy S8 review from 2017, and it had the same feature. I'd take a picture of something, Bixby would look it up and tell me if it was a landmark or something I could shop for. This version is definitely more powerful: ChatGPT has a lot more going for it and is probably more trustworthy than Bixby was back then, plus it's better integrated. But at the end of the day, it's not a huge leap forward. It's more accurate and capable, but nothing groundbreaking. Maybe future updates will make it more impressive.

Then there's the ChatGPT integration, which is really the main new thing here. It's less a smarter Siri and more Siri handing things off to ChatGPT. I think a lot of people expected more from Siri when they saw the new animation, thinking it was a big upgrade, but the actual Siri updates are mostly still missing. The ChatGPT stuff is what's really here.

If you ever ask Siri something that’s a bit beyond its usual scope—like requesting a recipe, planning a trip, or something more complex—it will recognize that and offer to let you switch to ChatGPT. If you agree, Siri will pull in a response from ChatGPT instead. This feature is completely free and doesn’t require an account. Apple has made sure your data isn’t used for model training, and OpenAI doesn’t gather any of your information from these requests.

You can also sign into your ChatGPT account if you want to keep a history of your questions, access more advanced models, or go beyond the daily free query limit. Plus, there’s now a button built into iOS that lets you easily upgrade to ChatGPT Plus.

So, while the new Siri animation might seem impressive to some, there’s not a lot that’s truly new with Siri yet, aside from the ability to type to it more easily. But there are some exciting updates on the way, including features that will let Siri take actions within apps. This could really open up new possibilities for Siri, and it’s definitely something to look forward to.

I think developers could really update their apps to make them work well with AI, and that could make a huge difference. It could even help Siri stand out and become useful again, though that’s still in progress.

That said, the rest of the AI stuff doesn’t really excite me all that much. Honestly, I just don’t find it that compelling. Like the writing tools—they seem like a safe, basic use of generative AI. It’s what they’re talking about the most, but I rarely need to rewrite things. And like I said, I’m not the kind of person who’s using image-generation tools all the time either.

The visual intelligence is useful, sure, but honestly, only a few times a year—like when I see an interesting plant or dog and want to know what it is. But here’s the funny part: remember that clip from the Apple keynote? The one where a person walks up to a dog, and instead of asking the owner what breed it is, they ask, “Hey, can I take a picture of your dog?” and then ask their phone what kind of dog it is? That moment is the perfect example of how some of these Apple intelligence features actually play out in real life.

You could definitely use AI tools, and they might work really well, but the real question is: should you? Sure, you can ask for a more friendly version of a letter to send, or you could just try to make it friendly from the start. You could also have ChatGPT plan out a whole trip for you, with an itinerary and everything, and it’s actually pretty impressive when it works. But are you really going to just follow an AI-generated plan without changing anything? And that’s if you even enjoy packing… Because there are definitely two types of people when it comes to that. Maybe some will go along with the AI’s suggestions, but I think I’m the type who wants a little more control.

Luckily, no matter what I’m packing, it always fits perfectly in my Ridge gear. Big thanks to Ridge for sponsoring this part of the video! I helped design their new commuter bag, and I’ve been using it every day for the last couple of months. It’s got these great water bottle holders on the sides, and I even added a sleeve in the back for a tablet. So, if you’ve got both a tablet and a laptop, each one has its own padded spot.

And when I’m flying, I also use the Ridge carry-on. It’s honestly the best. It’s weatherproof, super sturdy, and has these cool aluminum corner protectors. The 360-degree spinning wheels? Smooth as can be. It even has a spot for a tracker in case it gets lost. It’s so lightweight, and it’s the perfect travel companion. The best part is, the Ridge gear works together so well, it’s like they’re made for each other—an ideal travel setup.

A huge thanks to Ridge for sponsoring this part of the video and for letting me design some of their products, with more coming soon—so stay tuned! If you’re interested in picking up both the carry-on and the backpack, you can grab them at Ridge.com/MKBHD and get 30% off. They also have a Black Friday sale going on right now.

As for Apple, I think they’re going to keep improving and iterating on these features. We now have a solid baseline to expect from them, and this is the worst it’ll ever be—it’s only going to get better. That said, it’s crazy how much Apple is hyping their new AI features in their ads and on apple.com, as if “Apple Intelligence” is this game-changing breakthrough. Don’t get me wrong, there are some solid features, like the background eraser in Photos, which I honestly think is better than Google’s. But there’s still a lot of potential to unlock.

The best thing Apple’s done with AI, though, is boosting the on-device performance. With more RAM in every iPhone and the base Mac now coming with 16GB of memory, the entry-level Mac mini is a crazy good deal.

So, props to Apple for that! Anyway, thanks for watching, and I’ll catch you in the next one. Peace!
