Question

I wonder how barcodes and QR codes (and even characters) are recognized without capturing a photo. I have seen in many apps that when you hold the device over a QR code or barcode, the app automatically recognizes it and starts processing. Is there a scanning mechanism behind this? How can it be achieved, and what mechanisms are involved?

Thanks in advance.


Solution

  1) The library launches the phone's camera, which autofocuses and keeps analyzing frames until it can decode the information embedded in the image shown by the camera.

  2) The library then parses the decoded information and hands you the result.

The "decoded info" is the data encoded in the code itself:

  - QR code: the data is stored as a grid of square modules.
  - Barcode: the data is stored as vertical lines of varying widths.

The library contains all the logic for detecting the type of code and decoding it according to its format. For the details, read the docs of a QR/barcode library, or implement one yourself and learn from it.
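To give a feel for the kind of logic such a library runs after it has read the bar widths back into digits, here is a small sketch of the last decoding step for an EAN-13 barcode: verifying the check digit (weights alternate 1 and 3 over the first twelve digits). This is illustrative, not taken from any particular library:

```swift
// Sketch: verify the EAN-13 check digit, the final validation step a
// barcode library performs after turning bar widths back into digits.
func isValidEAN13(_ code: String) -> Bool {
    let digits = code.compactMap { $0.wholeNumberValue }
    guard digits.count == 13 else { return false }
    // Weights alternate 1, 3, 1, 3, … across the first 12 digits.
    let sum = digits.prefix(12).enumerated()
        .reduce(0) { $0 + $1.element * ($1.offset % 2 == 0 ? 1 : 3) }
    let check = (10 - sum % 10) % 10
    return check == digits[12]
}
```

A valid code such as "4006381333931" passes, while changing any digit makes the check fail, which is how a scanner rejects misreads and keeps scanning.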

OTHER TIPS

You can use AVCaptureSession, e.g.:

let session = AVCaptureSession()
var qrPayload: String?

func startSession() {
    guard !session.isRunning else { return }

    let output = AVCaptureMetadataOutput()
    output.setMetadataObjectsDelegate(self, queue: .main)

    let device: AVCaptureDevice?

    if #available(iOS 10.0, *) {
        device = AVCaptureDevice
            .DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
            .devices
            .first
    } else {
        device = AVCaptureDevice.devices(for: .video).first { $0.position == .back }
    }

    guard
        let camera = device,
        let input = try? AVCaptureDeviceInput(device: camera),
        session.canAddInput(input),
        session.canAddOutput(output)
    else {
        // handle failures here
        return
    }

    session.addInput(input)

    session.addOutput(output)
    output.metadataObjectTypes = [.qr]

    let videoLayer = AVCaptureVideoPreviewLayer(session: session)
    videoLayer.frame = view.bounds
    videoLayer.videoGravity = .resizeAspectFill

    view.layer.addSublayer(videoLayer)

    session.startRunning()
}

And extend your view controller to conform to AVCaptureMetadataOutputObjectsDelegate:

extension QRViewController: AVCaptureMetadataOutputObjectsDelegate {
    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        guard
            qrPayload == nil,
            let object = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
            let string = object.stringValue
        else { return }

        qrPayload = string
        print(string)

        // perhaps dismiss this view controller now that you’ve succeeded
    }
}

Note that I check that qrPayload is nil first, because metadataOutput(_:didOutput:from:) can be called a few more times before the view controller is dismissed.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow