Scripts/Features [Update] Computer Vision Examples for Unity 1.6.3 (depth sensor)

Unity Plugin Info
Plugin name: Computer Vision Examples for Unity
Official page: https://assetstore.unity.com/packages/tools/ai-ml-integration/computer-vision-examples-for-unity-174050
Version: 1.6.3
Extraction password:
Asset type: Scripts/Features



This is the next step in the evolution of depth sensor examples (incl. "Azure Kinect and Femto Bolt Examples", "Kinect-v2 Examples", etc.). Instead of a depth sensor, this asset uses a plain web camera or a video recording as input, and AI models to provide depth estimation, body tracking, object tracking, and other streams. The package contains over thirty demo scenes.

CVE v1.6 will remain available in the Asset Store as a Lite demo version. CVE v1.7 (and later) is now a Pro version and will only be available to select partners under a commercial license or agreement.

The avatar demo scenes show how to use user-controlled avatars in your scenes; the gesture demo, how to use discrete and continuous gestures in your projects; the fitting-room demos, how to overlay or blend the user's body with virtual models; the background-removal demo, how to display user silhouettes on a virtual background; and so on. Short descriptions of all demo scenes are available in the online documentation.

This package works with plain web cameras and with video clips that can be played by the Unity video player. It can be used with all Unity licenses (Free, Plus, and Pro).
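As a rough illustration of the webcam input mentioned above (this is not the asset's own code; the component name and resolution are hypothetical), a minimal Unity script can capture a plain web camera stream and display it on a material:

```csharp
using UnityEngine;

// Hypothetical sketch: feeds a plain webcam stream into a material,
// the same kind of input this package consumes instead of a depth sensor.
public class WebcamSource : MonoBehaviour
{
    WebCamTexture webcamTex;

    void Start()
    {
        // Use the first available camera at a requested resolution and frame rate.
        webcamTex = new WebCamTexture(WebCamTexture.devices[0].name, 1280, 720, 30);
        webcamTex.Play();

        // Show the live feed on this object's material.
        GetComponent<Renderer>().material.mainTexture = webcamTex;
    }

    void OnDestroy()
    {
        if (webcamTex != null) webcamTex.Stop();
    }
}
```

A prerecorded clip can be used the same way by pointing a VideoPlayer component's output texture at the material instead of a WebCamTexture.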

How to run the demo scenes:
1. Create a new Unity project (Unity 2023.2 or later, as required by Unity’s Sentis package).
2. Open the 'Package Manager'-window in Unity editor, click the '+'-button, select 'Install package by name' from the menu and enter 'com.unity.sentis'. Then hit Enter or the 'Install'-button. This will install 'Sentis' - the Unity package for AI model inference.
3. Import this package into the Unity project.
4. Open 'File / Build Settings' and switch the platform to 'PC, Mac & Linux Standalone'.
5. Check that 'Direct3D11' is the first option in the 'Auto Graphics API for Windows' list, under 'Player Settings / Other Settings / Rendering'.
6. First, open and run a demo scene that includes body tracking, from a subfolder of the 'ComputerVisionExamples/DemoScenes' folder (e.g. the AvatarDemo1 or OverlayDemo2 scene). Stand in front of the camera to calibrate. This is needed only once, to estimate the camera's intrinsic parameters.
7. Open and run a demo scene of your choice from a subfolder of the 'ComputerVisionExamples/DemoScenes'-folder. Short descriptions of all demo-scenes are available in the online documentation.
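The Sentis package installed in step 2 is what runs the AI models behind these demos. As a minimal sketch of how Sentis 1.x inference on a camera frame works (this is not the asset's own code; the model asset, field names, and backend choice are assumptions, and the Sentis API has changed between major versions):

```csharp
using Unity.Sentis;
using UnityEngine;

// Hypothetical sketch of Sentis 1.x model inference on a camera frame;
// not the asset's own code. API names differ in later Sentis versions.
public class DepthInference : MonoBehaviour
{
    public ModelAsset modelAsset;   // an ONNX model imported into the project
    public Texture inputTexture;    // e.g. a WebCamTexture frame

    IWorker worker;

    void Start()
    {
        // Load the serialized model and create a GPU inference worker.
        Model model = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
    }

    void Update()
    {
        // Convert the current frame to a tensor and run the model on it.
        using TensorFloat input = TextureConverter.ToTensor(inputTexture);
        worker.Execute(input);

        // Read back the first output (e.g. an estimated depth map).
        TensorFloat output = worker.PeekOutput() as TensorFloat;
        // ...use 'output' here, e.g. copy it into a render texture...
    }

    void OnDestroy() => worker?.Dispose();
}
```

The GPUCompute backend keeps inference off the CPU; Sentis also offers a CPU backend for platforms where compute shaders are unavailable.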







Author: cg小白兔, posted 6 hours ago