Solaris 11: understanding high values in the kernel statistics

What can cause these kernel statistics (as reported by top) to be so high?

Kernel: 152661 ctxsw, 2475 trap, 99065 intr, 1449 syscall, 3 fork, 2373 flt

Usually, my system shows much lower values, e.g.:

Kernel: 487 ctxsw, 3 trap, 904 intr, 435 syscall, 3 flt

but every now and then the numbers spike and the OS freezes. The load average stays below 1 the whole time.

Thank you!

Edit:

$ vmstat
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s3 s4   in   sy   cs us sy id
 1 0 0 2806128 2818224 43 207 0 0  0  0  5  4  4  0 10 9954  510 3740  0  2 98

$ prstat
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP       
   658 root        0K    0K sleep   60    -   2:15:27 0.5% nfsd_kproc/157
   245 root        0K    0K sleep   99  -20   0:16:08 0.2% zpool-volume/166
   577 root        0K    0K sleep   60    -   0:00:09 0.0% lockd_kproc/24
  8195 root       11M 4788K cpu0    49    0   0:00:00 0.0% prstat/1
   617 root       53M   36M cpu3    59    0   0:00:34 0.0% fmd/29
   117 root     2144K 1288K sleep   59    0   0:00:00 0.0% pfexecd/3
   136 root       13M 4824K sleep   59    0   0:00:00 0.0% syseventd/19
    46 root       17M 8260K sleep   59    0   0:00:01 0.0% dlmgmtd/20
    42 netcfg   3892K 2900K sleep   59    0   0:00:00 0.0% netcfgd/4
    94 daemon     14M 4824K sleep   59    0   0:00:00 0.0% kcfd/3
   614 daemon     12M 2068K sleep   59    0   0:00:00 0.0% nfsmapid/3
   708 hpsmh      24M 6256K sleep   59    0   0:00:00 0.0% hpsmhd/1
    13 root       19M   18M sleep   59    0   0:00:14 0.0% svc.configd/18
    11 root       24M   14M sleep   59    0   0:00:04 0.0% svc.startd/16
    71 netadm   4272K 2908K sleep   59    0   0:00:00 0.0% ipmgmtd/5
Total: 78 processes, 930 lwps, load averages: 0.40, 0.44, 0.46

Edit 2: stats just before the crash:

$ vmstat
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s3 s4   in   sy   cs us sy id
 1 0 0 2368992 2330108 41 216 0 0  0  0 130 4  4  0 64 39092 486 23076 0  7 93

$ prstat
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP       
   453 root        0K    0K sleep   99  -20   0:05:09 0.5% zpool-volume/166
   581 root        0K    0K sleep   60    -   0:20:36 0.4% nfsd_kproc/128
  1819 root       11M 6036K sleep   49    0   0:00:01 0.0% bash/1
   548 root        0K    0K sleep   60    -   0:00:16 0.0% lockd_kproc/12
     5 root        0K    0K sleep   99  -20   0:00:11 0.0% zpool-rpool/166
  1818 root       18M 5392K sleep   59    0   0:00:00 0.0% sshd/1
   555 root       58M   42M sleep   59    0   0:00:25 0.0% fmd/29
  3528 root       11M 5092K cpu5    59    0   0:00:00 0.0% prstat/1
     6 root        0K    0K sleep   99  -20   0:00:15 0.0% kmem_task/1
   501 root     9760K 1436K sleep   59    0   0:00:00 0.0% automountd/4
   499 root     9668K 1360K sleep   59    0   0:00:00 0.0% automountd/2
   488 root       14M 3896K sleep   59    0   0:00:00 0.0% inetd/4
   479 root     2780K 1488K sleep   59    0   0:00:00 0.0% hotplugd/3
   487 root     8928K 1164K sleep   59    0   0:00:00 0.0% cron/1
  1817 root       16M 3656K sleep   59    0   0:00:00 0.0% sshd/1
   468 daemon   7268K 4648K sleep   59    0   0:00:00 0.0% statd/1
   415 daemon   3508K 1440K sleep   59    0   0:00:02 0.0% rpcbind/1

This looks like a hardware problem with the disk, the disk controller or the SCSI/SAS cabling, or a software problem in ZFS. Between your two snapshots the interrupt rate roughly quadrupled (in: 9954 to 39092) and context switches grew six-fold (cs: 3740 to 23076), while the syscall rate barely moved (sy: 510 to 486), which points at a device or driver rather than at your applications.
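
To see where the interrupt load is coming from while it happens, the stock Solaris 11 tools intrstat and mpstat help; a minimal sketch (the 5-second interval is arbitrary):

$ intrstat 5   # interrupt counts per device driver, per interval
$ mpstat 5     # per-CPU intr/ithr/csw/icsw/syscl breakdown

If one device's interrupt count explodes together with the freeze, that is your prime suspect.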

You should open a case with Oracle.
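
Before (or while) doing that, check whether the fault manager and the drivers have already logged errors; these are all standard Solaris 11 commands:

$ fmadm faulty       # faults FMA has already diagnosed
$ fmdump -eV | less  # raw FMA error telemetry (ereports)
$ iostat -En         # soft/hard/transport error counters per device
$ zpool status -v    # ZFS read/write/checksum errors per vdev

Non-zero transport errors in iostat -En, or checksum errors in zpool status, would support the cabling/controller theory.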

If the server is completely frozen, it is still possible to force a crash dump from OpenBoot.
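
A rough sketch of that procedure on SPARC, assuming console access and a configured dump device (verify with dumpadm beforehand):

# drop to the ok prompt: Stop-A on the keyboard, or send a break
# from the serial console / service processor
ok sync

# sync forces a panic and writes a crash dump; after the reboot,
# savecore saves it under the directory reported by:
$ dumpadm

# Solaris 11 stores the dump compressed as vmdump.N; expand it and
# load it into the modular debugger:
$ savecore -f vmdump.0 .
$ mdb unix.0 vmcore.0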