What is RoomPlan?
RoomPlan is a Swift API powered by ARKit. Introduced at WWDC 2022, it uses the camera and LiDAR sensor on an iPhone or iPad to scan a room. The scan gives users a 3D model of the room along with information such as the room's size, the length and height of its walls, and the types of furniture detected.
RoomPlan uses a machine-learning model supported by ARKit and can detect 16 categories of objects, including sofas, tables, chairs, beds, televisions, and common household appliances.
Prerequisites for Using RoomPlan
- Devices that support room scanning are iOS devices equipped with a LiDAR sensor, such as the iPhone 12 Pro, iPhone 12 Pro Max, iPhone 13 Pro, iPhone 13 Pro Max, and iPad Pro (2020 or newer). Additionally, the device must be running iOS 16 or later (a minimal runtime check is sketched after this list).
- Xcode 14.0 or later: this version of Xcode includes the tools and framework required to build AR applications and use the RoomPlan API. Running Xcode 14 requires a Mac with macOS Monterey 12.5 or later.
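Beyond these hardware and software requirements, you can also check support at runtime. This is a minimal sketch, assuming you simply want to gate your scanning UI on RoomCaptureSession.isSupported (the function name is illustrative; the full integration appears later in this article):

import RoomPlan

// RoomCaptureSession.isSupported is true only on LiDAR-equipped devices
// running iOS 16 or later, so use it to decide whether to show the scanning UI.
func deviceSupportsRoomScanning() -> Bool {
    return RoomCaptureSession.isSupported
}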
How to Use RoomPlan?
There are two ways to use RoomPlan:
- Scanning Experience API (RoomCaptureView)
First, we’ll cover how to get a scanning experience using the RoomCaptureView API. RoomCaptureView is a subclass of UIView that can be easily placed in your application. RoomCaptureView provides a variety of benefits, including white line animations that outline detected objects in real time, interactive 3D models, text guides, and a 3D view of room results.
- Data API (RoomCaptureSession)
The second way involves using the RoomCaptureSession API to access scan data directly. The workflow consists of Scan, Process, and Export stages:
- Scan: set up and start a session, displaying progress and instructions. For the scan flow, you use the RoomCaptureSession API to configure a session and show progress while scanning.
- Process: process the scanned data and receive the model. For the process flow, you use the RoomBuilder class to process the scan data and generate the final 3D model.
- Export: generate and export USD/USDZ model files. For the export flow, you use the resulting data structure to display the created 3D room or export it to another file format. A minimal sketch of this Scan → Process → Export flow follows below.
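To make the Scan → Process → Export flow concrete, here is a minimal, hedged sketch of the data API. The class name RoomScanner and the output file name are illustrative; this article's project uses RoomCaptureView instead, so treat this only as an outline of the RoomCaptureSession workflow.

import Foundation
import RoomPlan

final class RoomScanner: NSObject, RoomCaptureSessionDelegate {
    private let session = RoomCaptureSession()
    private let builder = RoomBuilder(options: [.beautifyObjects])

    // Scan: configure and start the capture session.
    func start() {
        session.delegate = self
        session.run(configuration: RoomCaptureSession.Configuration())
    }

    func stop() {
        session.stop()
    }

    // Called when the session ends with the raw scan data.
    func captureSession(_ session: RoomCaptureSession, didEndWith data: CapturedRoomData, error: Error?) {
        Task {
            do {
                // Process: turn the raw data into a CapturedRoom model.
                let room = try await self.builder.capturedRoom(from: data)
                // Export: write a USDZ file of the final model.
                let url = FileManager.default.temporaryDirectory.appendingPathComponent("Room.usdz")
                try room.export(to: url, exportOptions: .parametric)
            } catch {
                print("Processing or export failed: \(error)")
            }
        }
    }
}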
Let’s practice
Okay friends, after covering the RoomPlan theory, we will now create a RoomPlan project using SwiftUI. In this project I will use RoomCaptureView to provide the scanning experience. Please create a new project in Xcode, and make sure the iPhone you are using supports LiDAR. Good luck, and don’t hesitate to ask if you have any difficulties!
- The first step is to create a file called RoomController.swift. This file will contain the configuration logic for running the RoomPlan framework, such as starting and stopping a session. I will write the code in RoomController.swift in several steps, each with an explanation:
- First, create a class called RoomController that implements the RoomCaptureViewDelegate protocol. Because this protocol inherits from NSCoding, RoomController must implement the encode(with:) and init?(coder:) methods. Although these methods are required, we only provide basic, unused implementations.
import RoomPlan
import SwiftUI

class RoomController: RoomCaptureViewDelegate {
    func encode(with coder: NSCoder) {
        fatalError("Not Needed")
    }

    required init?(coder: NSCoder) {
        fatalError("Not Needed")
    }
}
- Second, we create a singleton so we can access the RoomController instance from different parts of the code without creating a new instance every time it is needed. We will name this variable instance.
import RoomPlan
import SwiftUI

class RoomController: RoomCaptureViewDelegate {
    ...
    static var instance = RoomController()
}
- Next, we define several properties: captureView of type RoomCaptureView, sessionConfig of type RoomCaptureSession.Configuration to configure the scan when it starts, and finalResult of type CapturedRoom? to handle situations where the scan results may not always be available or successful.
import RoomPlan
import SwiftUI

class RoomController: RoomCaptureViewDelegate {
    ...
    static var instance = RoomController()

    var captureView: RoomCaptureView
    var sessionConfig: RoomCaptureSession.Configuration = RoomCaptureSession.Configuration()
    var finalResult: CapturedRoom?
}
Now, let’s discuss CapturedRoom for a moment before moving on to the next step. CapturedRoom is a data structure that contains the results of the room scanning and analysis process carried out by RoomPlan. Its elements are Surfaces (such as walls, doors, windows, and openings) and Objects (detected furniture), plus, from iOS 17, Sections, as described below.
At WWDC 2023, CapturedRoom received several updates. It gained a new element, Sections, alongside Surfaces and Objects.
Sections are new elements that describe areas inside a room.
The Surface element has several properties such as curve, edge, and category, and the newest one in iOS 17 is polygon.
Polygons are used to handle non-uniform surface areas, for example sloping walls, walls with blocks, curved windows, and so on.
In addition, within the category property, RoomPlan on iOS 17 can now detect floors. Moving on to the Object element: objects have category and attributes properties, the latter being an iOS 17 addition.
Attributes on objects are useful for better describing the various configurations within a category.
Surface and Object share several common properties, namely:
- Dimensions: the height, width, and length of an object or surface
- Confidence: one of three confidence levels for the scanned surface or object
- Transform: the 3D transform matrix
- Identifier: the unique identifier of the Surface or Object
- Parent: the newest property in iOS 17; it contains the parent's identifier. For example, the parent of a window is a wall, and the parent of a chair can be a table.
A small sketch of reading these properties from a scan result follows below.
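As an illustration (not part of the project we build below), here is a minimal sketch of reading these shared properties from a scan result; the summarize function name is made up for this example:

import RoomPlan

// Illustrative only: walk a CapturedRoom and read the shared
// Surface/Object properties described above.
func summarize(_ room: CapturedRoom) {
    for wall in room.walls {
        // dimensions is a simd_float3 holding the size in meters.
        print("Wall \(wall.identifier): dimensions \(wall.dimensions), confidence \(wall.confidence)")
    }
    for object in room.objects {
        print("Object \(object.category): dimensions \(object.dimensions), confidence \(object.confidence)")
    }
}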
- After creating the properties, we create the init() method, which is called whenever a RoomController object is created. By placing this code inside init(), we ensure that captureView and its delegate are set correctly from the start.
import RoomPlan
import SwiftUI

class RoomController: RoomCaptureViewDelegate {
    ...
    ...

    init() {
        captureView = RoomCaptureView(frame: .zero)
        captureView.delegate = self
    }
}
- Next, we implement the necessary delegate methods, namely captureView(shouldPresent…) and captureView(didPresent…).
shouldPresent — returns a Boolean value that indicates whether the adopter wishes to post-process and present the scan results.
didPresent — provides the delegate with the post-processed scan results once the view presents them.
The startSession() function is used to start a scanning session, while stopSession() is used to stop it. Both functions operate on the scanning session of captureView.
import RoomPlan
import SwiftUI

class RoomController: RoomCaptureViewDelegate {
    ...
    ...
    ...

    func captureView(shouldPresent roomDataForProcessing: CapturedRoomData, error: Error?) -> Bool {
        return true
    }

    func captureView(didPresent processedResult: CapturedRoom, error: Error?) {
        finalResult = processedResult
    }

    // to start scanning
    func startSession() {
        captureView.captureSession.run(configuration: sessionConfig)
    }

    // to stop session
    func stopSession() {
        captureView.captureSession.stop()
    }
}
- The final step is to create a RoomCaptureViewRepresentable struct that adopts the UIViewRepresentable protocol. This struct bridges SwiftUI and UIKit, allowing us to use RoomCaptureView (which comes from UIKit) inside SwiftUI. We implement two functions, makeUIView and updateUIView. The makeUIView function creates and returns the RoomCaptureView that SwiftUI will display.
import RoomPlan
import SwiftUI

class RoomController: RoomCaptureViewDelegate {
    ....
}

struct RoomCaptureViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> RoomCaptureView {
        RoomController.instance.captureView
    }

    func updateUIView(_ uiView: RoomCaptureView, context: Context) {
    }
}
2. The second step, now that we have covered the logic for using RoomPlan, is to create the view. You can edit the ContentView.swift file as follows:
import SwiftUI

struct ContentView: View {
    var body: some View {
        NavigationStack {
            VStack(spacing: 10) {
                Text("RoomPlan Feature")
                    .font(.title)
                    .bold()
                Text("The roomplan feature is a feature developed by Apple and released at WWDC 2022. If you want to use roomplan, you are required to have an iOS device that supports Lidar, because roomplan cannot run without a Lidar Sensor.")
                    .multilineTextAlignment(.center)
                Text("Tap button start to start scanning and follow the instruction")
                    .multilineTextAlignment(.center)
                    .padding(.bottom, 50)
                NavigationLink(destination: RoomPlanView()) {
                    Text("Start Scanning")
                        .padding(10)
                }
                .buttonStyle(.borderedProminent)
                .cornerRadius(30)
            }
            .padding(.horizontal, 16)
        }
    }
}

#Preview {
    ContentView()
}
For this view, I only made three explanatory texts and one button to navigate to the scanning screen, which will later use our camera. You can change this view based on your own preferences, and I won’t go into details regarding the code above. If you are still confused about it, you can learn about layout in SwiftUI 😉.
3. The third step is to create a RoomPlanView.swift file. This view is the scanning screen that will use our camera and carry out the room scanning process. You can edit this view based on your needs and preferences. Here’s an example of the code:
import SwiftUI

struct RoomPlanView: View {
    var roomController = RoomController.instance
    @State private var doneScanning: Bool = false

    var body: some View {
        ZStack {
            RoomCaptureViewRepresentable()
                .onAppear(perform: {
                    roomController.startSession()
                })
            VStack {
                Spacer()
                if doneScanning == false {
                    Button(action: {
                        roomController.stopSession()
                        self.doneScanning = true
                    }, label: {
                        Text("Done Scanning")
                            .padding(10)
                    })
                    .buttonStyle(.borderedProminent)
                    .cornerRadius(30)
                }
            }
            .padding(.bottom, 10)
        }
    }
}

#Preview {
    RoomPlanView()
}
Explanation:
- There is a variable roomController, whose value is the singleton instance of the RoomController class.
- Then, there is a variable doneScanning, which uses the @State property wrapper to change the appearance of the button during and after scanning.
- For the view itself, I used a ZStack to stack the RoomCaptureViewRepresentable() that we created earlier on top of a button that disappears when the scanning process is complete. On RoomCaptureViewRepresentable(), we call roomController.startSession() inside onAppear(), so the scanning session starts as soon as the view appears. In the button's action, we call roomController.stopSession() to stop the scanning session and change the value of doneScanning to true. A hedged sketch of exporting the finished scan follows below.
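If you want to do something with the finished scan, one option is to export finalResult to a USDZ file after stopping the session. Here is a minimal, hedged sketch; the exportResults() helper and the file name are illustrative additions, not part of the project code above:

import Foundation
import RoomPlan

extension RoomController {
    // Illustrative helper: write the last scan result to a USDZ file.
    func exportResults() {
        guard let room = finalResult else { return }
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("ScannedRoom.usdz")
        do {
            try room.export(to: url, exportOptions: .parametric)
            print("Exported room to \(url)")
        } catch {
            print("Export failed: \(error)")
        }
    }
}

You could call exportResults() from an extra button that appears once doneScanning becomes true.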
4. Fourth, there is a small additional file for devices that do not have LiDAR: a screen will appear stating that the device is not compatible. Please create the file UnsupportedDeviceView.swift; here is an example of the code:
import SwiftUI

struct UnsupportedDeviceView: View {
    var body: some View {
        VStack {
            Text("Unsupported Device")
                .font(.title)
                .foregroundColor(.red)
                .padding()
            Text("This device does not support Lidar.")
                .padding()
        }
    }
}

#Preview {
    UnsupportedDeviceView()
}
5. Fifth, we edit the file containing our @main view. Because I created the project with the title “RoomPlanSwiftUI”, I will edit the RoomPlanSwiftUIApp.swift file to look like this:
import SwiftUI
import RoomPlan

@main
struct RoomPlanSwiftUIApp: App {
    var body: some Scene {
        WindowGroup {
            checkDeviceView()
        }
    }
}

@ViewBuilder
func checkDeviceView() -> some View {
    if RoomCaptureSession.isSupported {
        ContentView()
    } else {
        UnsupportedDeviceView()
    }
}
Oh yes, don’t forget to add our camera access permission (the NSCameraUsageDescription key) to the app's Info.plist, friends.
RoomPlan Updates 2023
At WWDC 2023 there were several developments in the RoomPlan framework. Apart from the update to CapturedRoom
that I explained above, there were several advances as follows:
- Custom ARSession Support
- Multiroom support
- Accessibility
- Representation improvements
- Enhanced export function
Custom ARSession Support
RoomCaptureSession normally runs with a default ARSession. New in iOS 17, RoomPlan can use a custom ARSession with ARWorldTrackingConfiguration. There are several use cases for custom ARSession support, such as the following (a minimal setup sketch appears after this list):
- Increase immersion
- Add photos and videos to RoomPlan scans
- Seamlessly integrate Roomplan with your ARKit app
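Here is a minimal sketch of what this can look like, assuming iOS 17's RoomCaptureSession initializer that accepts an existing ARSession (the function name is illustrative):

import ARKit
import RoomPlan

// Assumes the iOS 17 RoomCaptureSession(arSession:) initializer.
@available(iOS 17.0, *)
func makeSharedCaptureSession(arSession: ARSession) -> RoomCaptureSession {
    // RoomPlan drives world tracking on the ARSession you pass in, so your
    // own ARKit features and the room scan share one session and coordinate space.
    let captureSession = RoomCaptureSession(arSession: arSession)
    captureSession.run(configuration: RoomCaptureSession.Configuration())
    return captureSession
}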
Multiroom Support
What is multiroom support? In previous versions of RoomPlan, you could do a single scan and get a 3D model for a single room. Say you have done several scans of different rooms in a house, such as a dining room, kitchen, living room, hallway, and bedroom. If you want to merge them, you face some challenges. First, they are all in their own coordinate systems, meaning the origin and orientation of the world coordinates differ for each room. Second, even if you stitch them together manually, you end up with duplicate walls and potentially duplicate objects. There are two ways to use multiroom support:
- First, use continuous ARSession.
- Second, use ARSession relocalization.
Multiroom support also comes with a new API:
CapturedStructure: a structure that combines multiple CapturedRooms into one.
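Merging happens through the new StructureBuilder API. A minimal, hedged sketch, assuming you have already collected the CapturedRoom result from each individual scan:

import RoomPlan

// Assumes iOS 17's StructureBuilder / CapturedStructure APIs.
@available(iOS 17.0, *)
func mergeRooms(_ rooms: [CapturedRoom]) async throws -> CapturedStructure {
    let structureBuilder = StructureBuilder(options: [.beautifyObjects])
    // Combines the individual scans into one structure with a single
    // coordinate system and de-duplicated walls and objects.
    return try await structureBuilder.capturedStructure(from: rooms)
}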
Accessibility
This year, RoomPlan added audio feedback when VoiceOver is enabled, allowing your phone to provide guidance about scanning and to describe what it sees.
Representation improvements
The representation of the scanned room was updated in the form of the CapturedRoom changes explained above, friends. In addition, in iOS 17 RoomPlan can detect objects and categorize them in a new way, such as distinguishing various types of sofas: single-seater sofas, L-shaped sofas, and straight sofas.
Enhanced export function
With iOS 17, there are two improvements to the export function, namely (a baseline export sketch follows the list below):
- UUID Mapping
- Model Provider
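As a point of reference, the baseline export API available since iOS 16 looks like the sketch below. The iOS 17 enhancements build on it by adding, among other things, a metadata file that maps exported geometry back to CapturedRoom UUIDs and a ModelProvider for more detailed object models; see the WWDC 2023 session for the exact signatures.

import Foundation
import RoomPlan

// Baseline export available since iOS 16: choose between a parametric USDZ
// (clean, simplified geometry) or the raw scanned mesh.
func exportRoom(_ room: CapturedRoom, to url: URL) throws {
    try room.export(to: url, exportOptions: .parametric) // or .mesh
}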
For more detailed information, you can watch the RoomPlan updates session from WWDC 2023; click this link.
Conclusion
Yay, we are finally at the end of the article. Today, you have learned the RoomPlan theory and implemented it in a project. You can develop your RoomPlan project further, especially by exporting 3D room models whose color, texture, and more you can customize using SceneKit. I hope this article provides detailed and useful information for everyone; if anything in this article is wrong, please let me know, and I hope you can create deeper projects using the RoomPlan framework. Sorry, I was not able to provide a screen recording of this project, but here is an example of the exported room results.
Thank you for taking your valuable time to read this article
Give 👏 claps on the article and follow the author for more stories.
Follow me: Instagram | LinkedIn
Link Github Project : Link GitHub