
Commit abbcdef4 authored by android-build-team Robot

Snap for 6329815 from ee650ee3 to rvc-release

Change-Id: I01091d747050c65f8aae1d9906111e3480c8a85b
parents d8329645 ee650ee3
+61 −20
@@ -1895,27 +1895,66 @@ typedef enum acamera_metadata_tag {
     * ACAMERA_SCALER_CROP_REGION can still be used to specify the horizontal or vertical
     * crop to achieve aspect ratios different than the native camera sensor.</p>
     * <p>By using this control, the application gains a simpler way to control zoom, which can
     * be a combination of optical and digital zoom. For example, a multi-camera system may
     * contain more than one lens with different focal lengths, and the user can use optical
     * zoom by switching between lenses. Using zoomRatio has benefits in the scenarios below:</p>
     * <ul>
     * <li>Zooming in from a wide-angle lens to a telephoto lens: A floating-point ratio provides
     *   better precision compared to an integer value of ACAMERA_SCALER_CROP_REGION.</li>
     * <li>Zooming out from a wide lens to an ultrawide lens: zoomRatio supports zoom-out whereas
     *   ACAMERA_SCALER_CROP_REGION doesn't.</li>
     * </ul>
     * <p>To illustrate, here are several scenarios of different zoom ratios, crop regions,
     * and output streams, for a hypothetical camera device with an active array of size
     * <code>(2000,1500)</code>.</p>
     * <ul>
     * <li>Camera Configuration:<ul>
     * <li>Active array size: <code>2000x1500</code> (3 MP, 4:3 aspect ratio)</li>
     * <li>Output stream #1: <code>640x480</code> (VGA, 4:3 aspect ratio)</li>
     * <li>Output stream #2: <code>1280x720</code> (720p, 16:9 aspect ratio)</li>
     * </ul>
     * </li>
     * <li>Case #1: 4:3 crop region with 2.0x zoom ratio<ul>
     * <li>Zoomed field of view: 1/4 of original field of view</li>
     * <li>Crop region: <code>Rect(0, 0, 2000, 1500) // (left, top, right, bottom)</code> (post zoom)</li>
     * <li><img alt="4:3 aspect ratio crop diagram" src="../images/camera2/metadata/android.control.zoomRatio/zoom-ratio-2-crop-43.png" /></li>
     * <li><code>640x480</code> stream source area: <code>(0, 0, 2000, 1500)</code> (equal to crop region)</li>
     * <li><code>1280x720</code> stream source area: <code>(0, 187, 2000, 1312)</code> (letterboxed)</li>
     * </ul>
     * </li>
     * <li>Case #2: 16:9 crop region with 2.0x zoom.<ul>
     * <li>Zoomed field of view: 1/4 of original field of view</li>
     * <li>Crop region: <code>Rect(0, 187, 2000, 1312)</code></li>
     * <li><img alt="16:9 aspect ratio crop diagram" src="../images/camera2/metadata/android.control.zoomRatio/zoom-ratio-2-crop-169.png" /></li>
     * <li><code>640x480</code> stream source area: <code>(250, 187, 1750, 1312)</code> (pillarboxed)</li>
     * <li><code>1280x720</code> stream source area: <code>(0, 187, 2000, 1312)</code> (equal to crop region)</li>
     * </ul>
     * </li>
     * <li>Case #3: 1:1 crop region with 0.5x zoom out to ultrawide lens.<ul>
     * <li>Zoomed field of view: 4x of original field of view (switched from wide lens to ultrawide lens)</li>
     * <li>Crop region: <code>Rect(250, 0, 1750, 1500)</code></li>
     * <li><img alt="1:1 aspect ratio crop diagram" src="../images/camera2/metadata/android.control.zoomRatio/zoom-ratio-0.5-crop-11.png" /></li>
     * <li><code>640x480</code> stream source area: <code>(250, 187, 1750, 1312)</code> (letterboxed)</li>
     * <li><code>1280x720</code> stream source area: <code>(250, 328, 1750, 1172)</code> (letterboxed)</li>
     * </ul>
     * </li>
     * </ul>
     * <p>As seen from the graphs above, the coordinate system of cropRegion now changes to the
     * effective after-zoom field-of-view, and is represented by the rectangle of (0, 0,
     * activeArrayWidth, activeArrayHeight). The same applies to AE/AWB/AF regions, and faces.
     * This coordinate system change isn't applicable to RAW capture and its related
     * metadata such as intrinsicCalibration and lensShadingMap.</p>
     * <p>Using the same hypothetical example above, and assuming output stream #1 (640x480) is
     * the viewfinder stream, the application can achieve 2.0x zoom in one of two ways:</p>
     * <ul>
     * <li>zoomRatio = 2.0, scaler.cropRegion = (0, 0, 2000, 1500)</li>
     * <li>zoomRatio = 1.0 (default), scaler.cropRegion = (500, 375, 1500, 1125)</li>
     * </ul>
     * <p>If the application intends to set aeRegions to be top-left quarter of the viewfinder
     * field-of-view, the ACAMERA_CONTROL_AE_REGIONS should be set to (0, 0, 1000, 750) with
     * zoomRatio set to 2.0. Alternatively, the application can set aeRegions to the equivalent
     * region of (500, 375, 1000, 750) for zoomRatio of 1.0. If the application doesn't
     * explicitly set ACAMERA_CONTROL_ZOOM_RATIO, its value defaults to 1.0.</p>
     * <p>One limitation of controlling zoom using zoomRatio is that the ACAMERA_SCALER_CROP_REGION
     * must only be used for letterboxing or pillarboxing of the sensor active array, and no
     * FREEFORM cropping can be used with ACAMERA_CONTROL_ZOOM_RATIO other than 1.0.</p>
@@ -1923,7 +1962,6 @@ typedef enum acamera_metadata_tag {
     * @see ACAMERA_CONTROL_AE_REGIONS
     * @see ACAMERA_CONTROL_ZOOM_RATIO
     * @see ACAMERA_SCALER_CROP_REGION
     */
    ACAMERA_CONTROL_ZOOM_RATIO =                                // float
            ACAMERA_CONTROL_START + 47,
@@ -2395,8 +2433,11 @@ typedef enum acamera_metadata_tag {
     * frames before the lens can change to the requested focal length.
     * While the focal length is still changing, ACAMERA_LENS_STATE will
     * be set to MOVING.</p>
     * <p>Optical zoom via this control will not be supported on most devices. Starting from API
     * level 30, the camera device may combine optical and digital zoom through the
     * ACAMERA_CONTROL_ZOOM_RATIO control.</p>
     *
     * @see ACAMERA_CONTROL_ZOOM_RATIO
     * @see ACAMERA_LENS_APERTURE
     * @see ACAMERA_LENS_FOCUS_DISTANCE
     * @see ACAMERA_LENS_STATE
+80 −53
@@ -19,6 +19,7 @@

#include <ctype.h>
#include <inttypes.h>
#include <algorithm>
#include <memory>
#include <stdint.h>
#include <stdlib.h>
@@ -149,9 +150,13 @@ private:
    bool mIsAudio;
    sp<ItemTable> mItemTable;

    /* Shift start offset (move to earlier time) when media_time > 0,
     * in media time scale.
     */
    uint64_t mElstShiftStartTicks;
    /* Initial start offset (move to later time), empty edit list entry
     * in media time scale.
     */
    uint64_t mElstInitialEmptyEditTicks;

    size_t parseNALSize(const uint8_t *data) const;
@@ -1215,7 +1220,6 @@ status_t MPEG4Extractor::parseChunk(off64_t *offset, int depth) {
                off64_t entriesoffset = data_offset + 8;
                uint64_t segment_duration;
                int64_t media_time;
                bool empty_edit_present = false;
                for (int i = 0; i < entry_count; ++i) {
                    switch (version) {
@@ -1247,45 +1251,37 @@ status_t MPEG4Extractor::parseChunk(off64_t *offset, int depth) {
                    }
                    // Empty edit entry would have to be first entry.
                    if (media_time == -1 && i == 0) {
                        empty_edit_present = true;
                        ALOGV("initial empty edit ticks: %" PRIu64, segment_duration);
                        /* In movie header timescale, and needs to be converted to media timescale
                         * after we get that from a track's 'mdhd' atom,
                         * which at times come after 'elst'.
                         */
                        mLastTrack->elst_initial_empty_edit_ticks = segment_duration;
                    } else if (media_time >= 0 && i == 0) {
                        ALOGV("first edit list entry");
                        mLastTrack->elst_media_time = media_time;
                        mLastTrack->elst_segment_duration = segment_duration;
                        ALOGV("segment_duration: %" PRIu64 " media_time: %" PRId64,
                              segment_duration, media_time);
                        // media_time is in media timescale as are STTS/CTTS entries.
                        mLastTrack->elst_shift_start_ticks = media_time;
                    } else if (empty_edit_present && i == 1) {
                        // Process second entry only when the first entry was an empty edit entry.
                        ALOGV("second edit list entry");
                        mLastTrack->elst_media_time = media_time;
                        mLastTrack->elst_segment_duration = segment_duration;
                        ALOGV("segment_duration: %" PRIu64 " media_time: %" PRId64,
                              segment_duration, media_time);
                        mLastTrack->elst_shift_start_ticks = media_time;
                    } else {
                        ALOGW("for now, unsupported entry in edit list %" PRIu32, entry_count);
                    }
                }
                // save these for later, because the elst atom might precede
                // the atoms that actually gives us the duration and sample rate
                // needed to calculate the padding and delay values
                mLastTrack->elst_needs_processing = true;
            }
            break;
        }
@@ -4324,9 +4320,9 @@ MediaTrackHelper *MPEG4Extractor::getTrack(size_t index) {
        }
    }

    ALOGV("track->elst_shift_start_ticks :%" PRIu64, track->elst_shift_start_ticks);

    uint64_t elst_initial_empty_edit_ticks = 0;
    if (mHeaderTimescale != 0) {
        // Convert empty_edit_ticks from movie timescale to media timescale.
        uint64_t elst_initial_empty_edit_ticks_mul = 0, elst_initial_empty_edit_ticks_add = 0;
@@ -4337,15 +4333,15 @@ MediaTrackHelper *MPEG4Extractor::getTrack(size_t index) {
            ALOGE("track->elst_initial_empty_edit_ticks overflow");
            return nullptr;
        }
        elst_initial_empty_edit_ticks = elst_initial_empty_edit_ticks_add / mHeaderTimescale;
    }
    ALOGV("elst_initial_empty_edit_ticks in MediaTimeScale :%" PRIu64,
          elst_initial_empty_edit_ticks);

    MPEG4Source* source =
            new MPEG4Source(track->meta, mDataSource, track->timescale, track->sampleTable,
                            mSidxEntries, trex, mMoofOffset, itemTable,
                            track->elst_shift_start_ticks, elst_initial_empty_edit_ticks);
    if (source->init() != OK) {
        delete source;
        return NULL;
@@ -5885,9 +5881,22 @@ media_status_t MPEG4Source::read(
                    break;
            }
            if( mode != ReadOptions::SEEK_FRAME_INDEX) {
                int64_t elstInitialEmptyEditUs = 0, elstShiftStartUs = 0;
                if (mElstInitialEmptyEditTicks > 0) {
                    elstInitialEmptyEditUs = ((long double)mElstInitialEmptyEditTicks * 1000000) /
                                             mTimescale;
                    /* Sample's composition time from ctts/stts entries are non-negative(>=0).
                     * Hence, lower bound on seekTimeUs is 0.
                     */
                    seekTimeUs = std::max(seekTimeUs - elstInitialEmptyEditUs, (int64_t)0);
                }
                if (mElstShiftStartTicks > 0) {
                    elstShiftStartUs = ((long double)mElstShiftStartTicks * 1000000) / mTimescale;
                    seekTimeUs += elstShiftStartUs;
                }
                ALOGV("shifted seekTimeUs:%" PRId64 ", elstInitialEmptyEditUs:%" PRId64
                      ", elstShiftStartUs:%" PRId64, seekTimeUs, elstInitialEmptyEditUs,
                      elstShiftStartUs);
            }

            uint32_t sampleIndex;
@@ -5933,7 +5942,12 @@ media_status_t MPEG4Source::read(

            if (mode == ReadOptions::SEEK_CLOSEST
                || mode == ReadOptions::SEEK_FRAME_INDEX) {
                if (mElstInitialEmptyEditTicks > 0) {
                    sampleTime += mElstInitialEmptyEditTicks;
                }
                if (mElstShiftStartTicks > 0) {
                    sampleTime -= mElstShiftStartTicks;
                }
                targetSampleTimeUs = (sampleTime * 1000000ll) / mTimescale;
            }

@@ -5976,12 +5990,12 @@ media_status_t MPEG4Source::read(
            if(err == OK) {
                if (mElstInitialEmptyEditTicks > 0) {
                    cts += mElstInitialEmptyEditTicks;
                }
                if (mElstShiftStartTicks > 0) {
                    // cts can be negative. for example, initial audio samples for gapless playback.
                    cts -= (int64_t)mElstShiftStartTicks;
                }
            }

        } else {
            err = mItemTable->getImageOffsetAndSize(
                    options && options->getSeekTo(&seekTimeUs, &mode) ?
@@ -6261,10 +6275,22 @@ media_status_t MPEG4Source::fragmentedRead(
    int64_t seekTimeUs;
    ReadOptions::SeekMode mode;
    if (options && options->getSeekTo(&seekTimeUs, &mode)) {

        int64_t elstInitialEmptyEditUs = 0, elstShiftStartUs = 0;
        if (mElstInitialEmptyEditTicks > 0) {
            elstInitialEmptyEditUs = ((long double)mElstInitialEmptyEditTicks * 1000000) /
                                     mTimescale;
            /* Sample's composition time from ctts/stts entries are non-negative(>=0).
             * Hence, lower bound on seekTimeUs is 0.
             */
            seekTimeUs = std::max(seekTimeUs - elstInitialEmptyEditUs, (int64_t)0);
        }
        if (mElstShiftStartTicks > 0) {
            elstShiftStartUs = ((long double)mElstShiftStartTicks * 1000000) / mTimescale;
            seekTimeUs += elstShiftStartUs;
        }
        ALOGV("shifted seekTimeUs:%" PRId64 ", elstInitialEmptyEditUs:%" PRId64
              ", elstShiftStartUs:%" PRId64, seekTimeUs, elstInitialEmptyEditUs,
              elstShiftStartUs);

        int numSidxEntries = mSegments.size();
        if (numSidxEntries != 0) {
@@ -6355,7 +6381,8 @@ media_status_t MPEG4Source::fragmentedRead(

        if (mElstInitialEmptyEditTicks > 0) {
            cts += mElstInitialEmptyEditTicks;
        }
        if (mElstShiftStartTicks > 0) {
            // cts can be negative. for example, initial audio samples for gapless playback.
            cts -= (int64_t)mElstShiftStartTicks;
        }
+2 −2
@@ -88,9 +88,9 @@ private:
         */
        int64_t elst_media_time;
        uint64_t elst_segment_duration;
        // Shift start offset (move to earlier time) when media_time > 0.
        uint64_t elst_shift_start_ticks;
        // Initial start offset (move to later time), from empty edit list entry.
        uint64_t elst_initial_empty_edit_ticks;
        bool subsample_encryption;

+4 −2
@@ -151,11 +151,13 @@ status_t AudioEffect::set(const effect_uuid_t *type,
    // audio flinger will not be retained. initCheck() will return the creation status
    // but all other APIs will return invalid operation.
    if (probe || iEffect == 0 || (mStatus != NO_ERROR && mStatus != ALREADY_EXISTS)) {
        char typeBuffer[64] = {}, uuidBuffer[64] = {};
        guidToString(type, typeBuffer, sizeof(typeBuffer));
        guidToString(uuid, uuidBuffer, sizeof(uuidBuffer));
        ALOGE_IF(!probe, "set(): AudioFlinger could not create effect %s / %s, status: %d",
                type != nullptr ? typeBuffer : "NULL",
                uuid != nullptr ? uuidBuffer : "NULL",
                mStatus);
        if (!probe && iEffect == 0) {
            mStatus = NO_INIT;
        }
+1 −2
@@ -886,7 +886,6 @@ status_t AudioSystem::getOutputForAttr(audio_attributes_t *attr,
                                        audio_stream_type_t *stream,
                                        pid_t pid,
                                        uid_t uid,
                                        const audio_config_t *config,
                                        audio_output_flags_t flags,
                                        audio_port_handle_t *selectedDeviceId,
@@ -896,7 +895,7 @@ status_t AudioSystem::getOutputForAttr(audio_attributes_t *attr,
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getOutputForAttr(attr, output, session, stream, pid, uid,
                                 config,
                                 flags, selectedDeviceId, portId, secondaryOutputs);
}
