Videos uploaded by user “Microsoft Research”
Image Composite Editor 2.0 adds auto-completion, stereographic projection
Image Composite Editor 2.0 introduces new features to the popular advanced panoramic image stitcher, including a new automatic image completion feature, which fills in the missing pixels when creating a panorama. Download Image Composite Editor from: http://research.microsoft.com/en-us/um/redmond/projects/ice/
Views: 215719 Microsoft Research
Applied Sciences Group: High Performance Touch
Modern touch devices allow one to interact with virtual objects. However, there is a substantial delay between when a finger moves and when the display responds. Microsoft researchers Albert Ng and Paul Dietz have built a laboratory test system that lets us experience the impact of different latencies on the user experience. The results help us understand how far we still have to go in improving touch performance.
Views: 1306534 Microsoft Research
Divide and conquer: How Microsoft researchers used AI to master Ms. Pac-Man
Read more at Next at Microsoft: aka.ms/999990 Microsoft researchers have created an artificial intelligence-based system that learned how to get the maximum score on the addictive 1980s video game Ms. Pac-Man, using a divide-and-conquer method that could have broad implications for teaching AI agents to do complex tasks that augment human capabilities.
Views: 266778 Microsoft Research
Applied Sciences Group: Interactive Displays: Behind the Screen Overlay Interactions
Presented at Microsoft TechForum 2012. Behind-the-screen interaction with a transparent OLED with view-dependent, depth-corrected gaze.
Views: 607117 Microsoft Research
Microsoft tests Project Natick, self-sustaining underwater datacenter
Read more: https://aka.ms/Rqyu4m Visit the project page: https://aka.ms/S3hymz Microsoft's Project Natick is leveraging technology from submarines and working with pioneers in marine energy for the second phase of its moonshot to develop self-sufficient underwater datacenters that can deliver rapid and agile cloud services to coastal cities.
Views: 307964 Microsoft Research
Unlock deeper learning with the new Microsoft Cognitive Toolkit
Microsoft Cognitive Toolkit (formerly known as CNTK) version 2.0 is now available to Developers and Data Scientists. Cognitive Toolkit is a free, easy-to-use, open-source toolkit that trains deep learning algorithms to learn like the human brain. Learn more about the latest version of Cognitive Toolkit at https://aka.ms/CognitiveToolkit
Views: 71622 Microsoft Research
Microsoft's underwater datacenter: Project Natick
Introducing Microsoft Project Natick, a Microsoft research project to manufacture and operate an underwater datacenter. The initial experimental prototype vessel, christened the Leona Philpot after a popular Xbox game character, was operated on the seafloor approximately one kilometer off the Pacific coast of the United States from August to November of 2015. Project Natick reflects Microsoft’s ongoing quest for cloud datacenter solutions that offer rapid provisioning, lower costs, high responsiveness, and are more environmentally sustainable. Learn more about the project from our blog http://news.microsoft.com/?p=276011 and http://www.projectnatick.com.
Views: 447799 Microsoft Research
IllumiRoom Projects Images Beyond Your TV for an Immersive Gaming Experience
IllumiRoom is a proof-of-concept Microsoft Research project designed to push the boundary of living room immersive entertainment by blending our virtual and physical worlds with projected visualizations. The effects in the video are rendered in real time and are captured live -- not special effects added in post processing. IllumiRoom project was designed by: Brett Jones, Hrvoje Benko, Eyal Ofek and Andy Wilson More info: http://research.microsoft.com/projects/illumiroom/
Views: 4872899 Microsoft Research
FlexSense: A Transparent Self-Sensing Deformable Surface
We present FlexSense, a new thin-film, transparent sensing surface based on printed piezoelectric sensors, which can reconstruct complex deformations without the need for any external sensing, such as cameras. Done in collaboration with the Media Interaction Lab, Hagenberg, Austria and the Institute of Surface Technologies and Photonics, Joanneum Research, FlexSense provides a fully self-contained setup that improves mobility and is not affected by occlusions. Using only a sparse set of sensors, printed on the periphery of the surface substrate, we devise two new algorithms to fully reconstruct the complex deformations of the sheet from these sparse sensor measurements alone. An evaluation shows that both proposed algorithms are capable of reconstructing complex deformations accurately. We demonstrate how FlexSense can be used for a variety of 2.5D interactions, including as a transparent cover for tablets, where bending can be performed alongside touch to enable magic lens style effects, layered input, and mode switching, as well as the ability to use our device as a high degree-of-freedom input controller for gaming and beyond. http://mi-lab.org/projects/flexsense/ http://research.microsoft.com
Views: 334339 Microsoft Research
Speech Recognition Breakthrough for the Spoken, Translated Word
Chief Research Officer Rick Rashid demonstrates a speech recognition breakthrough via machine translation that converts his spoken English words into computer-generated Chinese speech. The breakthrough is based on deep neural networks and significantly reduces errors in spoken as well as written translation. For more information on Speech Recognition and Translation, visit http://www.microsoft.com/translator/skype.aspx
Views: 1018454 Microsoft Research
SoundWave: Using the Doppler Effect to Sense Gestures
Gestures are becoming an increasingly popular means of interacting with computers. However, it is still relatively costly to deploy robust gesture-recognition sensors in existing mobile platforms. SoundWave is a real-time sensing technique that leverages a speaker and a microphone to robustly sense in-air gestures and motion around a device. It is capable of detecting a variety of gestures, and can directly control existing applications without requiring a user to wear any special sensors.
Views: 146068 Microsoft Research
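At its core, SoundWave emits an inaudible pilot tone from the device's speaker and looks for Doppler-shifted reflections of that tone in the microphone signal when a hand moves nearby. A minimal sketch of the underlying physics (the 20 kHz tone frequency and speed-of-sound value are illustrative assumptions, not taken from the video):

```python
# Doppler shift of a pilot tone reflected off a moving hand.
# For a reflector moving at speed v toward a co-located
# speaker/microphone pair, the round-trip shift is ~ 2*v/c * f.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def doppler_shift(tone_hz: float, hand_velocity: float) -> float:
    """Approximate frequency shift (Hz) seen at the microphone.

    Positive velocity means the hand moves toward the device;
    negative velocity (moving away) gives a downward shift.
    """
    return 2.0 * hand_velocity * tone_hz / SPEED_OF_SOUND

# A hand moving 0.5 m/s toward a 20 kHz tone shifts it by ~58 Hz --
# inaudible to people, but easy to pick out in an FFT of the mic input.
shift = doppler_shift(20_000.0, 0.5)
print(f"{shift:.1f} Hz")
```

Because the shift scales with velocity and its sign encodes direction, a few spectral measurements per frame are enough to classify simple push, pull, and wave gestures.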
Microsoft Soundscape: A Map Delivered in 3D Sound
“Soundscape fills in a lot of the mental map as you move, making it effortless and seamless to know what’s around you,” highlights Erin Lauridsen of Lighthouse for the Blind in San Francisco. Microsoft Soundscape is a research project that explores the use of 3D audio cues to help build richer awareness of your surroundings. Learn more about Soundscape at https://aka.ms/Soundscape Download the App for iOS devices: https://itunes.apple.com/us/app/microsoft-soundscape/id1240320677?ls=1&mt=8 Read the related story: https://blogs.msdn.microsoft.com/accessibility/2018/02/28/soundscape/ Audio description version here: https://youtu.be/c7DHzGMeeyI
Views: 33223 Microsoft Research
Deep Learning Demystified
John Platt of Microsoft Research discusses deep learning and what makes it different from other types of machine learning. The full interview can be found at: http://youtu.be/2SXZ-NsKfwg
Views: 43995 Microsoft Research
Project Premonition: Seeking to prevent disease outbreaks
A new Microsoft Research project aims to use autonomous drones, cutting-edge molecular biology and advanced cloud-based data analytics to detect early signs that potentially harmful diseases are spreading. http://research.microsoft.com
Views: 23581 Microsoft Research
FarmBeats tracks soil, moisture data 24/7
FarmBeats, a new agriculture research project developed by Microsoft, uses solar-powered white space-based Internet connectivity to record soil temperature and moisture levels and track them with cloud-based computing models. FarmBeats enables data-driven farming in remote areas through the use of inexpensive monitoring equipment, including cameras, to help increase the food yield of farms. Learn more: http://research.microsoft.com/
Views: 20468 Microsoft Research
Shake 'n' Sense
Shake 'n' Sense is a novel yet simple mechanical technique for mitigating the interference when two or more Kinect cameras point at the same part of a physical scene. The technique is particularly useful for Kinect, where the structured light source is not modulated. It requires only mechanical augmentation of the Kinect, without any need to modify the internal electronics, firmware or associated host software.
Views: 26154 Microsoft Research
Tech Showcase: Project Kinect for Azure depth sensor technology
We present a prototype of time-of-flight depth-sensing technology, which will be adopted in Project Kinect for Azure as well as in the next generation of HoloLens. This depth sensor outperforms the current state of the art in terms of depth precision, while maintaining both a small form factor and high power efficiency. The depth sensor supports various depth ranges, frame rates, and image resolutions. The user can choose between large and medium field-of-view modes. Our depth technology is used for real-time interaction scenarios such as hand or skeleton tracking and enables high-fidelity spatial mapping. It also empowers researchers and developers to build new scenarios for working with ambient intelligence using Azure AI. See more at https://www.microsoft.com/en-us/research/video/project-kinect-for-azure-depth-sensor-technology/
Views: 10688 Microsoft Research
CLAW: A Multifunctional Handheld Haptic Controller for Grasping, Touching, and Triggering in VR
CLAW extends the concept of a VR controller to a multifunctional haptic tool, using a single motor. At first glance, it looks very similar to a standard VR controller; a closer look reveals a unique motorized arm that rotates the index finger relative to the palm to simulate force feedback. CLAW acts as a multi-purpose controller that provides both the expected functionality of VR controllers (thumb buttons and joysticks, 6DOF control, index finger trigger) and a variety of haptic renderings for the most commonly expected hand interactions: grasping objects, touching virtual surfaces, and receiving force feedback. A unique characteristic of CLAW is its ability to adapt haptic rendering by sensing differences in the user's grasp and the situational context of the virtual scene. As a user holds a thumb against the tip of the finger, the device simulates a grasping operation: the closing of the fingers around a virtual object is met with a resistive force, generating a sense that the object lies between the index finger and the thumb. A force sensor embedded in the index finger rest, combined with changes to the motor's response profiles, enables the simulation of objects of different materials, from a fully rigid wooden block to an elastic sponge. If the user holds the thumb away from a grasp pose, for example on the handle, shaping the palm instead into a pointing gesture, the controller delivers touch sensations. Moving the tip of the finger toward the surface of a virtual object generates a resistance that pushes the finger back and prevents it from penetrating the virtual surface. Furthermore, a voice coil mounted under the tip of the index finger delivers small vibrations generated by surface texture as the finger slides along a virtual surface. Sensing the force applied by the user also aids interaction with virtual objects: pushing a slider allows the experienced friction to signal preferred states, and pressure can change the attributes of a paintbrush or pen in a drawing program. CLAW was realized with the aid of Inrak Choy, an intern from Stanford University. See more at https://www.microsoft.com/en-us/research/project/haptic-controllers/
Views: 35557 Microsoft Research
Pre-Touch Sensing for Mobile Interaction
New research uses a mobile phone’s ability to sense how you are gripping the device, as well as when and where the fingers are approaching it, to adapt interfaces on the fly. The research is outlined in the paper, "Pre-Touch Sensing for Mobile Interaction." Learn more about this and other innovative research from CHI 2016: https://blogs.msdn.microsoft.com/msr_er/2016/04/28/enhanced-virtual-reality-among-new-microsoft-research-advances-at-chi-2016/
Views: 601340 Microsoft Research
Augmenting the Field-of-View of Head-Mounted Displays with Sparse Peripheral Displays
We explore the concept of a sparse peripheral display, which augments the field-of-view of a head-mounted display with a lightweight, low-resolution, inexpensively produced array of LEDs surrounding the central high-resolution display. We show that sparse peripheral displays expand the available field-of-view up to 190° horizontal, nearly filling the human field-of-view. We prototyped two proof-of-concept implementations of sparse peripheral displays: a virtual reality headset, dubbed SparseLightVR, and an augmented reality headset, called SparseLightAR. Using SparseLightVR, we conducted a user study to evaluate the utility of our implementation, and a second user study to assess different visualization schemes in the periphery and their effect on simulator sickness. Our findings show that sparse peripheral displays are useful in conveying peripheral information and improving situational awareness, are generally preferred, and can help reduce motion sickness in nausea-susceptible people. As presented at ACM CHI 2016 on behalf of: Robert Xiao, Carnegie Mellon University Hrvoje Benko, Microsoft Research http://research.microsoft.com
Views: 57456 Microsoft Research
Programming DNA
Imagine a biological computer that operates inside a living cell, one that can be used to determine if a cell is cancerous and then trigger its death. In this project, this is done using DNA as a programmable material. Just like a computer, DNA is highly programmable into a whole range of complex behaviors. This could enable a wide range of biotechnology applications, allowing for the detection and treatment of disease at a level of precision that has not been possible so far. It could also enable the production of new medical compounds far more efficiently, and ultimately the creation of biological computers at the molecular scale. Learn more at: http://news.microsoft.com/stories/computingcancer
Views: 9979 Microsoft Research
Haptic Feedback at the Fingertips
Presenting fingertip haptics: touch feedback on flat keyboards and touchscreens. Imagine feeling key clicks while typing on a Touch Cover or a Windows Phone, and locating a tile on a touchscreen through its unique tactile texture. Such effects are realized with piezoelectric actuators and electrostatic haptics technology. http://research.microsoft.com
Views: 85996 Microsoft Research
Introduction to Language Understanding Intelligent Service (LUIS) - Microsoft Cognitive Services
One of the key problems in human-computer interaction is the computer's ability to understand what a person wants; LUIS addresses this problem. You can use LUIS to develop virtual assistants, chat bots, IoT experiences, or any intelligent service on any device. In this session, Osmar Soursour, Software Development Engineer at ATL-Cairo, will walk you through LUIS capabilities and how you can build language understanding models and deploy them to an HTTP endpoint in a few steps. You can also watch our advanced session. For more information about LUIS, please visit www.luis.ai/help. Learn more: https://www.microsoft.com/cognitive-services/ The Microsoft Cognitive Services API collection lets you tap into an ever-growing collection of powerful AI algorithms developed by experts in their fields, spanning vision, speech, language, and knowledge. These REST APIs integrate into whatever language you prefer, on your platform of choice, so your iOS, Android, and Windows apps have a consistent user experience. The APIs are constantly improving, learning, and getting smarter, so experiences are always up to date.
Views: 49926 Microsoft Research
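A deployed LUIS model is queried as a plain HTTP GET carrying the user's utterance, and the service replies with JSON naming the top-scoring intent and any recognized entities. A hypothetical sketch of composing such a request URL (the region, app ID, key, and overall URL shape below are placeholder assumptions for illustration, not taken from the session):

```python
from urllib.parse import urlencode

# Placeholder values -- a real deployment supplies its own region,
# application ID, and subscription key.
REGION = "westus"
APP_ID = "00000000-0000-0000-0000-000000000000"

def build_query_url(utterance: str, subscription_key: str) -> str:
    """Compose a GET URL for a hypothetical LUIS-style prediction endpoint."""
    base = f"https://{REGION}.api.cognitive.microsoft.com/luis/v2.0/apps/{APP_ID}"
    params = urlencode({"subscription-key": subscription_key, "q": utterance})
    return f"{base}?{params}"

url = build_query_url("turn off the kitchen lights", "<your-key>")
print(url)
```

The response for an utterance like the one above would identify an intent (say, a lights-control intent defined in your model) plus entities such as the room name, which your bot or IoT service then acts on.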
Stroke Recovery with Kinect
Learn how Kinect is being used to help people with motor recovery after a stroke. Built using the Microsoft Kinect for Windows software development kit, the system evaluates a patient's progress.
Views: 15833 Microsoft Research
Skype Translator demo from WPC 2014
Watch a live demonstration of speech-to-speech translation using Skype Translator at WPC 2014. Skype Translator is one of the newest innovations derived from decades of research in speech-recognition, automatic-translation, and machine-learning technologies. It is being developed jointly by Skype and the Microsoft Translator team within Microsoft Research. For more information on Skype Translator, visit http://www.microsoft.com/translator/skype.aspx
Views: 244507 Microsoft Research
Fusion4D: Real-time Performance Capture of Challenging Scenes
We contribute a new pipeline for live multi-view performance capture, generating temporally coherent high-quality reconstructions in real-time. Our algorithm supports both incremental reconstruction, improving the surface estimation over time, as well as parameterizing the nonrigid scene motion. Our approach is highly robust to both large frame-to-frame motion and topology changes, allowing us to reconstruct extremely challenging scenes. We demonstrate advantages over related real-time techniques that either deform an online generated template or continually fuse depth data nonrigidly into a single reference model. Finally, we show geometric reconstruction results on par with offline methods which require orders of magnitude more processing time and many more RGBD cameras.
Views: 54808 Microsoft Research
SemanticPaint: Interactive 3D Labelling and Learning at your Fingertips
We present a new interactive approach to 3D scene understanding. Our system, SemanticPaint, allows users to simultaneously scan their environment, whilst interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations, and labels new unseen parts of the environment. Unlike offline systems, where capture, labelling and batch learning often takes hours or even days to perform, our approach is fully online. Read the paper: http://research.microsoft.com/en-US/projects/semanticpaint/valentin2015semanticpaint.pdf Main Contacts: Shahram Izadi, Microsoft Research: [email protected] Interactive 3D Technologies, Microsoft Research: http://research.microsoft.com/en-us/groups/i3d/ Philip Torr, University of Oxford: [email protected] Torr Vision Group, University of Oxford: http://www.robots.ox.ac.uk/~tvg/
Views: 55484 Microsoft Research
Microsoft Translator in the Classroom
In July 2017, a group of Chinese students visiting from the University of Washington stopped by the Microsoft AI and Research offices to learn about Microsoft Translator’s speech translation technology. In this video, Will Lewis - Principal Technical PM for Microsoft Translator – demonstrated how the Translator live feature and Presentation Translator add-in for PowerPoint can be used to provide live transcription and translation in the classroom. Want to learn how to start using the Translator live feature and Presentation Translator? Check out our how-to and demo videos: Get started with Microsoft Translator live feature: https://youtu.be/C5hrGKDdHkA Microsoft Translator live feature in action: https://youtu.be/16yAGeP2FuM Get Started with Presentation Translator: https://youtu.be/6Pmtl5j5C3A
Views: 10115 Microsoft Research
CRISPR.ML - Machine learning meets gene editing
DNA is the building block of life, but DNA can also contain glitches that contribute to serious and unavoidable health issues that affect billions of people. What if there were a way to change your DNA to eliminate the glitches before they caused problems? Scientists currently use CRISPR, which targets genes and edits the DNA of cells. However, with a typical computer it would take over 200 years to effectively use CRISPR with the entire human genome. But working with CRISPR scientists, Microsoft is using machine learning and Azure cloud computing to rapidly scale up and decrease that amount of time from centuries to weeks. To learn more, visit: https://aka.ms/wny6ih
Views: 4894 Microsoft Research
Teaching Kinect for Windows to Read Your Hands
One promising direction in the evolution of Kinect for Windows is enabling hand-gesture recognition. A machine-learning project uses a large, varied set of images of people's hands to train Kinect to determine if a hand is open or closed. This enables the development of a handgrip detector, which could launch another step forward in natural user interfaces.
Views: 116213 Microsoft Research
Project Malmo – Enabling AI technology that can collaborate with humans
Project Malmo, a platform that uses the world of Minecraft as a testing ground for advanced artificial intelligence research and innovation, is available for novice to experienced programmers on GitHub via an open-source license. The system is primarily designed to help researchers develop sophisticated AI that can do things like learn, converse, make decisions and complete complex tasks. It supports research on a range of methods such as reinforcement learning, deep learning and symbolic AI, allowing researchers to compare and integrate different approaches to advance AI understanding, reasoning, learning and communications. Try it today. Project Malmo is available at https://aka.ms/github-malmo See more on this video at https://www.microsoft.com/en-us/research/video/project-malmo-enabling-ai-technology-can-collaborate-humans/
Views: 11094 Microsoft Research
Efficient and Precise Interactive Hand Tracking
See an example of 3D hand tracking research from Microsoft, in proceedings at SIGGRAPH 2016. Researchers created virtual controls that are thin enough that you can touch your fingers together to get an experience of touching something hard. They also developed sensory experiences that allow people to push against something soft and pliant rather than hard and unforgiving, which appears to feel more authentic. Read more at: https://blogs.microsoft.com/next/?p=57052
Views: 100357 Microsoft Research
Satya Nadella introducing Seeing AI Prototype at Build 2016 conference
Microsoft CEO Satya Nadella introduces the Seeing AI prototype along with software engineer Saqib Shaikh at the Microsoft Build Developer Conference. Shaikh, who lost his sight when he was seven, helped develop the project, which uses computer vision and natural language processing to describe a person's surroundings, read text, answer questions and even identify emotions on people's faces. To get this technology into the hands of as many people as possible, this research project has evolved to be a free smartphone app, released in 2017. Visit http://SeeingAI.com to download the app.
Views: 30660 Microsoft Research
Kinect and sign language translation
See how Kinect's sign language translation capabilities help the hearing and the deaf communicate. For more information on Microsoft Translator, visit http://www.microsoft.com/translator
Views: 79994 Microsoft Research
Why I work at Microsoft Research Cambridge
Find out about working at Microsoft Research Cambridge, and meet some of our Researchers. Learn more at https://www.microsoft.com/en-us/research/lab/microsoft-research-cambridge/
Views: 7903 Microsoft Research
Azure Accelerated Machine Learning with Project Brainwave
Azure Machine Learning Accelerated Models, powered by Project Brainwave, deliver real-time AI with phenomenal performance at affordable cost. The first accelerated model, with more accelerated models coming soon, is available now at http://aka.ms/aml-real-time-ai. For context on FPGAs for Microsoft data centers, see also http://aka.ms/project-catapult Read more at https://aka.ms/Rd4bp8
Views: 5482 Microsoft Research
Microsoft Azure Overview
This video provides a comprehensive but brief overview of Microsoft Azure for research scientists. You will learn cloud computing basics, patterns and terminology, and the fundamentals of Microsoft Azure itself, including virtual machines, web sites, cloud services, and building blocks for applications. We also introduce typical patterns of design for research scientists to use the cloud.
Views: 11482 Microsoft Research
Cardiolens: Pulse and Vital Sign Measurement in Mixed Reality Using a HoloLens
Cardiography, the quantitative measurement of the functioning of the heart, traditionally requires customized, obtrusive contact sensors. Using new methods, photoplethysmography and ballistocardiography signals can be captured with ubiquitous sensors, such as webcams and accelerometers. However, these signals are not visible to the unaided eye. We present Cardiolens, a mixed reality system that enables real-time, hands-free measurement and visualization of blood flow and vital signs from multiple people. The system combines a front-facing webcam, imaging ballistocardiography, and remote imaging photoplethysmography methods for recovering pulse signals. A heads-up display allows users to view their own heart rate whenever they are wearing the device, and the heart rate and heart rate variability of another person simply by looking at them. Cardiolens provides the wearer with a new way to understand physiological signals and has applications in human-computer interaction and in the study of social psychology. See more at https://www.microsoft.com/en-us/research/video/cardiolens-pulse-and-vital-sign-measurement-in-mixed-reality-using-a-hololens/
Views: 5581 Microsoft Research
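Remote photoplethysmography works because blood flow causes tiny periodic intensity changes in skin pixels, so the heart rate appears as the dominant frequency of the averaged skin signal over time. A toy sketch of that recovery step, using a synthesized 1.2 Hz (72 bpm) trace in place of real camera frames (the frame rate, noise level, and search band are illustrative assumptions):

```python
import math
import random

# Synthesize 10 seconds of "camera" data at 30 fps: a weak 1.2 Hz
# pulse (72 bpm) buried in per-frame noise. In a real system this
# trace would be the mean intensity of skin pixels in each frame.
FPS = 30.0
N = 300
random.seed(0)
trace = [0.05 * math.sin(2 * math.pi * 1.2 * n / FPS)
         + 0.02 * random.gauss(0, 1) for n in range(N)]
mean = sum(trace) / N
trace = [x - mean for x in trace]  # remove the DC (skin tone) offset

def power_at(freq_hz: float) -> float:
    """Spectral power of the trace at one frequency (direct DFT bin)."""
    re = sum(x * math.cos(2 * math.pi * freq_hz * n / FPS)
             for n, x in enumerate(trace))
    im = sum(x * math.sin(2 * math.pi * freq_hz * n / FPS)
             for n, x in enumerate(trace))
    return re * re + im * im

# Scan only the physiologically plausible band, 0.7-4.0 Hz (42-240 bpm).
candidates = [0.7 + 0.1 * k for k in range(34)]
pulse_hz = max(candidates, key=power_at)
print(f"{pulse_hz * 60:.0f} bpm")
```

Restricting the search to the plausible heart-rate band is what makes this robust to low-frequency lighting drift and high-frequency sensor noise, the same reason real rPPG pipelines band-pass the signal before peak-picking.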
Microsoft & Prism Skylabs: Using AI to help organizations search visual data
Prism Skylabs is using Microsoft Cognitive Services to help businesses search, analyze and categorize their videos automatically with artificial intelligence. Prism created its Prism Vision app using the Computer Vision API from Microsoft Cognitive Services. The company also developed strategic partnerships with companies around the world, helping them search through pictures, closed-circuit and security camera footage and other video for specific events, items and people. Through this, the company was able to track and measure 4.5 billion interactions with customers. Read more at: https://wp.me/p5qjnp-ilx Find more about Microsoft Cognitive Services: https://www.microsoft.com/cognitive-services/en-us/
Views: 10392 Microsoft Research
Holograph: 3-D spatiotemporal interactive data visualization
Holograph is an interactive, 3-D data-visualization research platform that can render static and dynamic data above or below the plane of the display using a variety of 3-D stereographic techniques. The platform enables rapid exploration, selection, and manipulation of complex, multidimensional data to create and refine natural user-interaction techniques and technologies, with the goal of empowering everyone to understand the growing tide of large, complex data sets. http://research.microsoft.com
Views: 49057 Microsoft Research
How To Use the Translation Features of Microsoft PowerPoint
Make your presentation slides available to a worldwide audience by using the translation features of Microsoft PowerPoint! This how to video will walk you through everything you need to know to get started. For more information on using Translator with Microsoft PowerPoint, visit: http://www.microsoft.com/translator/powerpoint.aspx
Views: 26730 Microsoft Research
Haptic Links: Bimanual Haptics for Virtual Reality Using Variable Stiffness Actuation
We present Haptic Links, electro-mechanically actuated physical connections capable of rendering variable stiffness between two commodity handheld virtual reality (VR) controllers. When attached, Haptic Links can dynamically alter the forces perceived between the user’s hands to support the haptic rendering of a variety of two-handed objects and interactions. They can rigidly lock controllers in an arbitrary configuration, constrain specific degrees of freedom or directions of motion, and dynamically set stiffness along a continuous range. We demonstrate and compare three prototype Haptic Links: Chain, Layer-Hinge, and RatchetHinge. We then describe interaction techniques and scenarios leveraging the capabilities of each. Our user evaluation results confirm that users can perceive many two-handed objects or interactions as more realistic with Haptic Links than with typical unlinked VR controllers.  See more at https://www.microsoft.com/en-us/research/video/haptic-links-bimanual-haptics-virtual-reality-using-variable-stiffness-actuation/
Views: 15871 Microsoft Research
Turing Award winner Leslie Lamport
Leslie Lamport of Microsoft Research receives the 2013 ACM A.M. Turing Award for his outstanding contributions to computer science. Lamport is well known to computer scientists around the world for his foundational work in distributed computing, including the creation of the Paxos algorithm for implementing fault-tolerant distributed systems. His 1978 paper, "Time, Clocks, and the Ordering of Events in a Distributed System," is one of the most cited in the history of computer science. Lamport's immense contributions have resulted in improved correctness, performance, and reliability of computer systems used around the world today. See Lamport's "Time, Clocks, and the Ordering of Events in a Distributed System" at: http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html#time-clocks
Views: 5585 Microsoft Research
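The "Time, Clocks" paper's central idea is the logical clock: a counter per process that orders events without any synchronized physical time, using just two rules. A minimal sketch of those rules (the class and method names here are ours, not Lamport's notation):

```python
class LamportClock:
    """Logical clock from Lamport's 1978 'Time, Clocks' paper.

    Rule 1: increment the counter before each local event.
    Rule 2: on receiving a message, jump past its timestamp,
            then count the receive as an event.
    """

    def __init__(self) -> None:
        self.time = 0

    def tick(self) -> int:
        """A local event (including sending a message)."""
        self.time += 1
        return self.time

    def recv(self, msg_time: int) -> int:
        """Merge the sender's timestamp, then count the receive event."""
        self.time = max(self.time, msg_time) + 1
        return self.time

# If process a sends to process b, the send event is guaranteed to be
# ordered before the receive event, whatever b's clock read beforehand.
a, b = LamportClock(), LamportClock()
sent_at = a.tick()        # a's send event
got_at = b.recv(sent_at)  # b's receive event, always > sent_at
print(sent_at, got_at)
```

This "happened-before" ordering is the foundation that later work like Paxos builds on when reasoning about which events a distributed system can safely consider concurrent.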
Using machine learning and AI to reduce hospital readmissions
The introduction of cloud computing and machine learning is helping researchers identify ways to reduce hospital readmissions for chronic conditions and provide corresponding actionable guidelines for patient-provider teams. Learn more: http://blogs.msdn.com/b/msr_er/archive/2015/08/28/all-that-raas-saving-lives-and-transforming-healthcare-economics.aspx http://research.microsoft.com
Views: 5145 Microsoft Research
Symposium: Deep Learning - Alex Graves
Neural Turing Machines - Alex Graves
Views: 7035 Microsoft Research
Microsoft & Human Interact: Players control the narrative in Starship Commander
Human Interact is using Microsoft Cognitive Services for Starship Commander, an interactive virtual reality science fiction game that gives players control over the narrative using real conversation. Because Human Interact uses invented words and place names, the company utilized the Custom Speech Service by Microsoft Cognitive Services, creating a custom script so that characters in the game understand what players mean when they speak. Read more at: https://wp.me/p5qjnp-ilx Find more about Microsoft Cognitive Services: https://www.microsoft.com/cognitive-services/en-us/
Views: 45887 Microsoft Research
Technology-focused university levels the playing field with AI for students who are deaf (Extended)
The distinguished Rochester Institute of Technology (RIT) in Rochester, New York, is renowned for graduating successful professionals who are deaf and hard of hearing. They account for 8.8 percent of the school’s nearly 19,000 students. To best serve them, the National Technical Institute for the Deaf was established at RIT in 1967. Here dedicated researchers—many of them deaf themselves—are working with Microsoft, using artificial intelligence and Microsoft Cognitive Services to develop a custom automatic speech recognition solution, making the world more accessible and inclusive for all students.
Views: 5973 Microsoft Research
Introduction to Speaker Recognition API  - Microsoft Cognitive Services
Voice has unique characteristics, similar to a fingerprint, that can be used to identify a user: just as everyone has a unique fingerprint, everyone has a unique voice. In this session, Mohamed Gouda, Senior Program Manager at ATL-Cairo, will talk about how the Speaker Recognition RESTful APIs help you recognize users based on their voice. Learn more: https://www.microsoft.com/cognitive-services/
Views: 11454 Microsoft Research
Leaders Eat Last : Why Some Teams Pull Together and Others Don't
Simon Sinek's mission is to help people wake up every day inspired to go to work and return home every night fulfilled by their work. His first book, Start With Why, offered the essential starting point, explaining the power of focusing on WHY we do what we do, before getting into the details of WHAT and HOW. Now Sinek is back to reveal the next step in creating happier and healthier organizations. He helps us understand, in simple terms, the biology of trust and cooperation and why they're essential to our success and fulfillment. Organizations that create environments in which trust and cooperation thrive vastly outperform their competition. And, not coincidentally, their employees love working there.
Views: 689444 Microsoft Research
RichReview: Blending Ink, Speech, and Gesture to Support Collaborative Document Review
This paper introduces a novel document annotation system that aims to enable the kinds of rich communication that usually only occur in face-to-face meetings. Our system, RichReview, lets users create annotations on top of digital documents using three main modalities: freeform inking, voice for narration, and deictic gestures in support of voice. RichReview uses novel visual representations and time-synchronization between modalities to simplify annotation access and navigation. Moreover, RichReview's versatile support for multi-modal annotations enables users to mix and interweave different modalities in threaded conversations. A formative evaluation demonstrates early promise for the system, finding support for voice, pointing, and the combination of both to be especially valuable. In addition, initial findings point to the ways in which both content and social context affect modality choice. http://research.microsoft.com
Views: 14891 Microsoft Research
