And it’s not RAM, it’s UM for an SoC. The usage of memory changed with the introduction of Apple Silicon.
“Unified” only means there’s not a discrete block for the CPU and a discrete block for the GPU to use. But it’s still RAM: specifically, LPDDR4X (for M1), LPDDR5 (for M2), or LPDDR5X (for M3).
Besides, low-end PCs with integrated graphics have been using unified memory for decades; no one ever said “They don’t have RAM, they have UM!”
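To make the distinction being argued concrete, here is a toy Python sketch (names and values made up for illustration; real systems do this in hardware and drivers, not Python lists) of what “unified” actually changes: it removes the CPU-to-GPU copy, not the RAM itself.

```python
# Toy illustration: in a discrete-GPU model the CPU and GPU each own a
# separate buffer and data must be copied between them; in a unified model
# both address the same allocation. The memory is ordinary RAM either way.

def discrete_upload(cpu_buffer):
    """Discrete model: GPU work requires an explicit copy into 'VRAM'."""
    vram = list(cpu_buffer)          # copy across the bus
    vram[0] += 1                     # GPU mutates its own copy
    return cpu_buffer, vram          # the CPU's copy is now stale

def unified_update(shared_buffer):
    """Unified model: CPU and GPU read/write the same allocation."""
    shared_buffer[0] += 1            # no copy; both sides see the change
    return shared_buffer

cpu = [41, 2, 3]
stale, vram = discrete_upload(cpu)
print(stale[0], vram[0])            # 41 42 -> two diverging copies
print(unified_update(cpu)[0])       # 42    -> one shared copy
```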
Yes, that’s true, but it’s still an indicator of an uninformed reporter.
Apple Silicon chips pass data from one dedicated core directly to another without it passing through memory, hence the smaller processor cache. There are between 18 and 58 cores in the M3 (model dependent). The architecture works very differently from the conventional CPU/GPU/RAM model.
I can run FCP and Logic Pro and have memory to spare with 16GB of UM. The only thing that pushes me into swap is Chrome. lol
It’s a pointless distinction.
And in this case, it makes 8gig look even worse.
Maybe you’re not familiar with the apps I’m referring to. Final Cut Pro and Logic Pro are professional video and audio workstations.
If I tried to master an export from Adobe Premiere Pro in Pro Tools on a PC, I’d need 32GB of RAM to prevent stutter. I only use ~12GB of 16GB doing the same on Apple Silicon.
8GB of UM is not for someone running two pro apps at once. It’s for grandma to use for online banking and check her email and Facebook.
My dude, you’re literally in here arguing that because Apple has a blob for both CPU memory and GPU memory that somehow makes that blob “not RAM.” Apple’s design might give fantastic performance, but that’s irrelevant to the fact that the memory on the chip is RAM of known and established standards.
Read my other replies to this comment. There’s no GPU. It’s an SoC.
BCM2835 is an SoC too. And so is RK3328. And Mali-450 is a GPU.
https://www.apple.com/newsroom/2023/10/apple-unveils-m3-m3-pro-and-m3-max-the-most-advanced-chips-for-a-personal-computer/
Each power-intensive process is given its own dedicated core. The OS is designed specifically to send dedicated processes to the associated core. For example, your CPU isn’t bogged down decrypting data while loading an application.
You can’t compare it to anything else out at this time. Just learn about it, or don’t. Guessing is just a waste of time.
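The offloading idea described above can be sketched in ordinary code. A minimal, hypothetical Python illustration (the XOR “decrypt” is a stand-in, and on Apple Silicon the real offload target is fixed-function hardware, not a thread pool):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of offloading: hand a heavy task (a stand-in "decrypt") to a
# separate worker so the main flow ("loading the application") isn't
# blocked waiting on it. Dedicated silicon does the same thing in hardware.

def decrypt(blob):
    """Stand-in for dedicated-engine work: a trivial XOR cipher."""
    return bytes(b ^ 0x5A for b in blob)

with ThreadPoolExecutor() as pool:
    future = pool.submit(decrypt, b'\x19\x3f\x36\x36\x35')
    app_state = "loading"            # main flow proceeds meanwhile
    plaintext = future.result()      # collect the offloaded result

print(app_state, plaintext)          # loading b'Cello'
```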
https://docs.kernel.org/scheduler/sched-capacity.html
Basic priority-based scheduling.
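For reference, “basic priority-based scheduling” means no more than this: pick the highest-priority runnable task first. A minimal sketch (task names and priority values invented for illustration; the linked kernel doc layers per-CPU capacity on top of the same idea):

```python
import heapq

# Minimal priority-based ready queue: pop always returns the
# highest-priority task; equal priorities keep insertion order.

class ReadyQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0                # tie-breaker for equal priorities

    def push(self, priority, task):
        # heapq is a min-heap, so negate priority to pop the highest first
        heapq.heappush(self._heap, (-priority, self._seq, task))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

rq = ReadyQueue()
rq.push(10, "decrypt-io")
rq.push(50, "ui-render")
rq.push(30, "app-load")
print([rq.pop() for _ in range(3)])  # ['ui-render', 'app-load', 'decrypt-io']
```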
Sent to one of two processors on a PC, or 18-52 dedicated cores in an M chip.
Like has been done on laptops with on-board video cards since, well, forever?
It’s different. The GPU is broken into several parts and integrated into the SoC along with the CPU’s dedicated processes. Data is passed within the SoC without entering UM, which is used exclusively as a storage liaison.
You should check out Apple Silicon M-Series. Specs don’t translate to performance in the way conventional PC architecture does. I guarantee you’ll see PC manufacturers going to 2nm SoC configurations soon enough. The performance is undeniable.
Soooo Integrated Graphics?
Negative.
https://www.apple.com/newsroom/2023/10/apple-unveils-m3-m3-pro-and-m3-max-the-most-advanced-chips-for-a-personal-computer/
So it’s not on same chip with CPU?
A CPU performs integer math.
A GPU performs floating-point math.
Those are only two of the 18-52 cores (model dependent) of Apple M chips. The OS is designed around this for maximum efficiency. Most Macs don’t even have a fan anymore.
There. Is. No. Comparison. In. PC.
A GPU performs integer math.
A CPU performs floating point math.
All four statements are true.
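The point is trivially checkable: any general-purpose CPU executes both kinds of arithmetic in hardware, and modern GPUs likewise have both integer and floating-point ALUs. Running on the CPU, for instance:

```python
# Both integer and floating-point operations run on whichever processor
# executes them; neither kind of math is exclusive to CPUs or GPUs.
a, b = 7, 2
print(a // b)   # 3    -> integer division on the CPU
print(a / b)    # 3.5  -> floating-point division on the same CPU
```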
That’s correct. My mistake.