Revert Android changes to java.util.Arrays.deepEquals0.

Change 669c2cc344614fd721cfff2e7a9e80ad6e8b368c introduced this
change, and a similar one to deepHashCode, as an optimization, but it
introduced a subtle bug in deepEquals and, according to benchmarks,
actually made deepEquals slower.

The bug is demonstrated by
  Arrays.deepEquals(
    new Object[] { new Object[] { "Hello", "world" } },
    new Object[] { new String[] { "Hello", "world" } })
which should return true according to the documentation, but returns
false with the current implementation (it bails out because cl1 != cl2
for the inner arrays: Object[] and String[] are distinct classes, even
though String[] is a subtype of Object[] and the contents are equal).
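
As a sketch of the mechanism (this assumes the Android change
short-circuited on a class mismatch; the actual code may differ): the
upstream instanceof-based dispatch in deepEquals0 tolerates different
array classes, while a class-identity check does not.

    Object[] inner1 = new Object[] { "Hello", "world" };
    String[] inner2 = new String[] { "Hello", "world" };

    // String[] is a subtype of Object[], so the upstream code treats
    // both elements as Object[] and recurses into deepEquals:
    boolean bothObjectArrays =
        inner1 instanceof Object[] && inner2 instanceof Object[]; // true

    // A class-identity fast path sees two distinct classes and returns
    // false without ever comparing the contents:
    boolean sameClass =
        inner1.getClass() == inner2.getClass(); // false: Object[] vs String[]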

A test for this case is included in this change.
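
For reference, a minimal standalone check of the documented behavior
might look like the following (the test actually added by this change
may be structured differently):

    import java.util.Arrays;

    public class DeepEqualsRegression {
        public static void main(String[] args) {
            Object[] a = new Object[] { new Object[] { "Hello", "world" } };
            Object[] b = new Object[] { new String[] { "Hello", "world" } };
            // Per the Arrays.deepEquals documentation, a and b are deeply
            // equal: the element arrays have different classes but equal
            // contents.
            if (!Arrays.deepEquals(a, b)) {
                throw new AssertionError("expected deepEquals to return true");
            }
        }
    }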

The performance was benchmarked using
  vogar --benchmark libcore/benchmarks/src/benchmarks/DeepArrayOpsBenchmark.java

With a userdebug build of current AOSP (with the Android changes) on a
marlin, we get these results:

    Trial Report (1 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepEquals, vm=default, parameters={arrayLength=0001}}
      Results:
        runtime(ns): min=31063.10, 1st qu.=31063.10, median=31063.10, mean=31063.10, 3rd qu.=31063.10, max=31063.10
    Trial Report (2 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepEquals, vm=default, parameters={arrayLength=0004}}
      Results:
        runtime(ns): min=30677.09, 1st qu.=30677.09, median=30677.09, mean=30677.09, 3rd qu.=30677.09, max=30677.09
    Trial Report (3 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepEquals, vm=default, parameters={arrayLength=0016}}
      Results:
        runtime(ns): min=62130.26, 1st qu.=62130.26, median=62130.26, mean=62130.26, 3rd qu.=62130.26, max=62130.26
    Trial Report (4 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepEquals, vm=default, parameters={arrayLength=0256}}
      Results:
        runtime(ns): min=436376.91, 1st qu.=436376.91, median=436376.91, mean=436376.91, 3rd qu.=436376.91, max=436376.91
    Trial Report (5 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepEquals, vm=default, parameters={arrayLength=2048}}
      Messages:
        INFO: This experiment does not require a microbenchmark. The granularity of the timer (1.212us) is less than 0.1% of the measured runtime. If all experiments for this benchmark have runtimes greater than 1.212ms, consider the macrobenchmark instrument.
      Results:
        runtime(ns): min=8108656.76, 1st qu.=8108656.76, median=8108656.76, mean=8108656.76, 3rd qu.=8108656.76, max=8108656.76
    Trial Report (6 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepHashCode, vm=default, parameters={arrayLength=0001}}
      Results:
        runtime(ns): min=11116.16, 1st qu.=11116.16, median=11116.16, mean=11116.16, 3rd qu.=11116.16, max=11116.16
    Trial Report (7 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepHashCode, vm=default, parameters={arrayLength=0004}}
      Results:
        runtime(ns): min=12169.95, 1st qu.=12169.95, median=12169.95, mean=12169.95, 3rd qu.=12169.95, max=12169.95
    Trial Report (8 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepHashCode, vm=default, parameters={arrayLength=0016}}
      Results:
        runtime(ns): min=22464.17, 1st qu.=22464.17, median=22464.17, mean=22464.17, 3rd qu.=22464.17, max=22464.17
    Trial Report (9 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepHashCode, vm=default, parameters={arrayLength=0256}}
      Results:
        runtime(ns): min=181320.84, 1st qu.=181320.84, median=181320.84, mean=181320.84, 3rd qu.=181320.84, max=181320.84
    Trial Report (10 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepHashCode, vm=default, parameters={arrayLength=2048}}
      Messages:
        INFO: This experiment does not require a microbenchmark. The granularity of the timer (1.212us) is less than 0.1% of the measured runtime. If all experiments for this benchmark have runtimes greater than 1.212ms, consider the macrobenchmark instrument.
      Results:
        runtime(ns): min=4208129.54, 1st qu.=4208129.54, median=4208129.54, mean=4208129.54, 3rd qu.=4208129.54, max=4208129.54

With the change above reverted, we get:

    Trial Report (1 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepEquals, vm=default, parameters={arrayLength=0001}}
      Results:
        runtime(ns): min=30214.49, 1st qu.=30214.49, median=30214.49, mean=30214.49, 3rd qu.=30214.49, max=30214.49
    Trial Report (2 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepEquals, vm=default, parameters={arrayLength=0004}}
      Results:
        runtime(ns): min=30545.04, 1st qu.=30545.04, median=30545.04, mean=30545.04, 3rd qu.=30545.04, max=30545.04
    Trial Report (3 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepEquals, vm=default, parameters={arrayLength=0016}}
      Results:
        runtime(ns): min=61349.10, 1st qu.=61349.10, median=61349.10, mean=61349.10, 3rd qu.=61349.10, max=61349.10
    Trial Report (4 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepEquals, vm=default, parameters={arrayLength=0256}}
      Results:
        runtime(ns): min=426826.51, 1st qu.=426826.51, median=426826.51, mean=426826.51, 3rd qu.=426826.51, max=426826.51
    Trial Report (5 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepEquals, vm=default, parameters={arrayLength=2048}}
      Messages:
        INFO: This experiment does not require a microbenchmark. The granularity of the timer (1.206us) is less than 0.1% of the measured runtime. If all experiments for this benchmark have runtimes greater than 1.206ms, consider the macrobenchmark instrument.
      Results:
        runtime(ns): min=7472845.96, 1st qu.=7472845.96, median=7472845.96, mean=7472845.96, 3rd qu.=7472845.96, max=7472845.96
    Trial Report (6 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepHashCode, vm=default, parameters={arrayLength=0001}}
      Results:
        runtime(ns): min=12490.86, 1st qu.=12490.86, median=12490.86, mean=12490.86, 3rd qu.=12490.86, max=12490.86
    Trial Report (7 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepHashCode, vm=default, parameters={arrayLength=0004}}
      Results:
        runtime(ns): min=13076.82, 1st qu.=13076.82, median=13076.82, mean=13076.82, 3rd qu.=13076.82, max=13076.82
    Trial Report (8 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepHashCode, vm=default, parameters={arrayLength=0016}}
      Results:
        runtime(ns): min=26644.46, 1st qu.=26644.46, median=26644.46, mean=26644.46, 3rd qu.=26644.46, max=26644.46
    Trial Report (9 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepHashCode, vm=default, parameters={arrayLength=0256}}
      Results:
        runtime(ns): min=195284.92, 1st qu.=195284.92, median=195284.92, mean=195284.92, 3rd qu.=195284.92, max=195284.92
    Trial Report (10 of 10):
      Experiment {instrument=runtime, benchmarkMethod=deepHashCode, vm=default, parameters={arrayLength=2048}}
      Messages:
        INFO: This experiment does not require a microbenchmark. The granularity of the timer (1.206us) is less than 0.1% of the measured runtime. If all experiments for this benchmark have runtimes greater than 1.206ms, consider the macrobenchmark instrument.
      Results:
        runtime(ns): min=4918176.95, 1st qu.=4918176.95, median=4918176.95, mean=4918176.95, 3rd qu.=4918176.95, max=4918176.95
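
Summarizing the medians from the two runs (runtime in ns; the delta is
the effect of reverting, so negative means the revert is faster):

        method        arrayLength  with changes     reverted    delta
        deepEquals           0001      31063.10     30214.49    -2.7%
        deepEquals           0004      30677.09     30545.04    -0.4%
        deepEquals           0016      62130.26     61349.10    -1.3%
        deepEquals           0256     436376.91    426826.51    -2.2%
        deepEquals           2048    8108656.76   7472845.96    -7.8%
        deepHashCode         0001      11116.16     12490.86   +12.4%
        deepHashCode         0004      12169.95     13076.82    +7.5%
        deepHashCode         0016      22464.17     26644.46   +18.6%
        deepHashCode         0256     181320.84    195284.92    +7.7%
        deepHashCode         2048    4208129.54   4918176.95   +16.9%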

So the original change improved the performance of deepHashCode but
degraded the performance of deepEquals. Reverting only the deepEquals
change therefore fixes the bug and should improve performance, while
keeping the deepHashCode improvement.

(These benchmarks were run with http://r.android.com/666024, which
fixes the benchmark set-up.)

Test: cts-tradefed run cts-dev -m CtsLibcoreTestCases -t libcore.java.util.ArraysTest
Bug: 74236526
Change-Id: I8356c56c968aa6837463e34187427663d45e9c0e