Open Terminal, enter:

sudo nvram boot-args="maxmem=8192"

and reboot. This will limit the RAM to 8 GiB. Now start using your Mac with the usual workload.
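
After the reboot you can do a quick sanity check that the limit took effect; hw.memsize should now report roughly 8 GiB instead of the full 16 GiB (the exact value reported may vary):

sysctl hw.memsize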

To re-enable the full 16 GiB of RAM, simply enter sudo nvram -d boot-args and reboot again.


Your dd command won't work as intended, because the number of blocks written is 0 (count=0) and the block size would be 1 byte (bs=1). As far as I can tell, only a "file" with a size of 7 GiB is created in the file system catalog, but no data is written to the file itself at all. If the count were 1 (count=1), one byte of random data would be written at the 7 GiB offset of the file temp_7gb (seek=7g).
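
You can see this yourself by comparing the size recorded in the catalog with the space actually used (assuming the original command was dd if=/dev/urandom of=temp_7gb bs=1 count=0 seek=7g, reconstructed from the operands above):

dd if=/dev/urandom of=temp_7gb bs=1 count=0 seek=7g
ls -lh temp_7gb   # logical size as recorded in the catalog (7 GiB)
du -h temp_7gb    # space actually written – should be (close to) zero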

The destination (of=temp_7gb) is dubious: it creates a file in the current working directory. You either have to cd to a file system on the RAM disk first (e.g. cd /Volumes/RAM-Disk/) to create the file there, or write directly to the RAM-disk device (of=/dev/devX).
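
If you don't already have a RAM disk mounted, a rough sketch of creating a ~7 GiB one looks like this (the size argument to ram:// is in 512-byte sectors, so 7 GiB ≈ 14680064 sectors; the volume name RAM-Disk is just an example):

diskutil erasevolume HFS+ "RAM-Disk" $(hdiutil attach -nomount ram://14680064)
cd /Volumes/RAM-Disk/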

dd is a tool that measures disk I/O rather than CPU load/speed or memory usage/pressure.

With a clever combination of dd operands you can still use it to simulate CPU load and memory usage:

  1. if=/dev/urandom or if=/dev/zero determines how much the CPU has to compute
  2. of=/dev/null keeps the disk out of the measurement entirely
  3. bs=x determines the memory usage (the buffer size is roughly proportional to x)
  4. count=y controls how long the test runs

Examples:

dd if=/dev/urandom of=/dev/null bs=1 count=1000 

mainly measures the system-call overhead (including any Spectre / Meltdown mitigations your kernel uses, which make system calls slower than they used to be). Generating cryptographically strong random numbers also takes significant computation, but one read and one write system call per byte will dominate that. The memory footprint is low (on my system about 400 kB).
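
To put numbers on these examples, you can wrap each run in time; dd itself also prints a throughput summary when it finishes (or when you press Ctrl-T to send it SIGINFO):

time dd if=/dev/urandom of=/dev/null bs=1 count=1000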

dd if=/dev/urandom of=/dev/null bs=1g count=10

mainly measures the CPU speed, because it has to compute a lot of random data. The memory footprint is high (on my system about 1 GB). bs=1m would be about as fast but use much less memory.
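
To check the memory-footprint claims, macOS's BSD time utility can print the maximum resident set size alongside the timing (the numbers will of course differ from system to system):

/usr/bin/time -l dd if=/dev/urandom of=/dev/null bs=1g count=10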

dd if=/dev/zero of=/dev/null bs=1g count=10

mainly measures the memory bandwidth (here ~7 GB/s) for the kernel's /dev/zero driver doing a memset in kernel space into dd's buffer. The memory footprint is roughly the buffer size, which is much larger than any caches. (Some systems with Iris Pro graphics have 128 MiB or 256 MiB of eDRAM; testing with bs=128m vs. bs=512m should show that difference.)

The kernel's /dev/null driver probably discards the data without even reading it, so you're just measuring memory write bandwidth, not alternating write + read. (And system-call overhead should be negligible with only one read and one write per 1 GiB copied.)
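
To probe the eDRAM point mentioned above, you could compare two runs that move the same total amount of data (10 GiB each here) with different buffer sizes:

dd if=/dev/zero of=/dev/null bs=128m count=80
dd if=/dev/zero of=/dev/null bs=512m count=20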

dd if=/dev/zero of=/dev/null bs=32k count=100000

mainly measures the CPU cache-write bandwidth (here ~13 GB/s) and system-call overhead. The CPU doesn't have much to compute (zeros!); the memory footprint is low (on my system about 470 kB).

L1d cache size is 32 KiB. You'd think bs=24k would be faster (because it fits in L1d more comfortably; dd's buffer isn't the only thing in L1d, so a full 32 KiB buffer suffers more evictions), but the increased system-call overhead per kB copied might make it worse.
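
If you want to test that guess, compare the throughput reported for the two buffer sizes over roughly the same total amount of data (the counts are just picked so both runs copy about 3 GB):

dd if=/dev/zero of=/dev/null bs=24k count=133000
dd if=/dev/zero of=/dev/null bs=32k count=100000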

L2 cache is 256 KiB, L3 is 3 to 8 MiB. bs=224k should see pretty good bandwidth. You can run dd on each core in parallel and bandwidth will scale, because L2 caches are per-core private, unlike the shared L3 and DRAM. (On many-core Xeon systems it takes multiple cores to saturate the available DRAM bandwidth, but on a desktop/laptop one core can come pretty close.)
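
A minimal sketch of such a parallel run (four copies here; adjust to your core count, each dd prints its own throughput summary when it finishes):

for i in 1 2 3 4; do
  dd if=/dev/zero of=/dev/null bs=224k count=100000 &
done
wait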