
Visual Studio Live Share gives you pair programming without the shared keyboards

Microsoft continues to build the best all-in-one DevOps platform for mobile development.

With Live Share (here in Visual Studio Code) you can see what the other person is looking at, from the comfort of your own IDE.
Microsoft

NEW YORK—Decades after it introduced IntelliSense, the code completion and information features that transform Visual Studio into something more than just a text editor, Microsoft is introducing something that it claims is just as exciting: Live Share.

Collaboration is critical for many developers. Having another pair of eyes look over a problematic bug can offer insight that has proven elusive, and tapping the knowledge of a seasoned veteran is an important source of training and education. Some developers advocate pair programming, a system of development in which two people literally share a keyboard and take turns driving, but most feel it is intrusive and inconvenient. Ad hoc huddles around a single screen are common but usually mean that one developer has to contend with the preferences of another, hindering their productivity. Screen sharing avoids the awkward seating but forces a trade-off: if the sharer hands over keyboard and mouse access, they give up control; if they don't, the other person can't take the initiative.

Live Share is Microsoft's solution. It provides a shared editing experience within Visual Studio and Visual Studio Code (currently only for JavaScript, TypeScript, and C#) that's similar to the shared editing found in word processors: each person can see the other's cursor and text selections, and each can make edits. But it goes further by enabling shared debugging, too. A project can be launched under the debugger, and both people can see the call stack, examine in-scope variables, or even change values in the immediate window. Both sides can single-step the debugger to advance through the program.

It provides rich collaboration while still letting both developers use the environment they're comfortable and familiar with. If you prefer Visual Studio, with your windows laid out just so and the same key bindings you learned for Visual C++ 6 back in the '90s, you can keep all of that; it doesn't matter that your peer is using Visual Studio Code on a Mac with (ugh) vim key bindings. With Live Share, you just send a sharing request to your colleague, and they can connect to your project, editor, and debugger from the comfort of their own environment.

The feature will be released as a preview for Visual Studio Code and Visual Studio at some unspecified point in the future, using a combination of updates to the core programs and extensions to round out the functionality. Microsoft stresses that the preview is still at an early stage. Technically, it allows multi-way collaboration (not just pairs), though this may not be enabled at first. Eventually it will allow direct connections between systems on the same network, but initially sharing activity may have to bounce through a Microsoft server.

Even at this early stage, however, it looks tremendously useful and like a huge step forward in collaboration and productivity.

Building a better DevOps platform

More immediately, today marks the general availability of Visual Studio App Center (formerly Mobile Center), Microsoft's one-stop shop for mobile application deployment and testing. Point App Center at your source repository, hosted on Microsoft's Visual Studio Team Services (VSTS) or on GitHub, and it will fetch the code, set up build scripts, and run unit and integration tests.

That's standard continuous integration stuff, but App Center goes further: it can run your application tests on real hardware, both iOS and Android, to span dozens of different screen size and operating system combinations. You can even see screenshots of the app running on the various different makes and models of handset.

Once your application is passing its tests, App Center has a beta deployment system so that you can roll it out to beta testers. Need to make a quick fix to address a bug? If your app is written in JavaScript, you can use Code Push to send updated scripts to your users without needing to do a full build and reinstall. This works even for stable builds that have been submitted to their respective app stores; you can patch live applications, and we're told that Apple and Google will allow this as long as the patches aren't too radical.
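
For apps using App Center's React Native SDK, the client-side opt-in is small. The TypeScript sketch below uses the react-native-code-push package to show roughly what that looks like; the options shown are illustrative, not required settings.

    // Minimal sketch of a client opting into Code Push updates via the
    // react-native-code-push package. The options are illustrative only.
    import codePush from "react-native-code-push";

    // Check App Center for a pushed JavaScript update and install it on the
    // next restart, so users pick up fixes without a store reinstall.
    export async function checkForOtaUpdate(): Promise<void> {
      await codePush.sync({
        installMode: codePush.InstallMode.ON_NEXT_RESTART,
        updateDialog: { title: "Update available" },
      });
    }

A typical pattern is to call a check like this when the app resumes, so pushed fixes are picked up quietly in the background.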

App Center lets you test across a whole bunch of devices at the same time. Notice how the first three phones have crashed out to the desktop because of a bug in the app being tested.
Microsoft

Even after a successful beta test, you'll probably want to collect crash and analytics data from your users to discover problems and better understand how they're using your application. App Center has tooling for that, too.
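
On the client side, wiring up that telemetry takes only a few lines with the App Center SDK for React Native. The TypeScript sketch below is a minimal illustration; the event name and properties are invented for the example.

    // Minimal sketch of reporting analytics and checking crash state with the
    // App Center SDK for React Native. Event names and properties are
    // illustrative, not part of any required schema.
    import Analytics from "appcenter-analytics";
    import Crashes from "appcenter-crashes";

    export async function reportCheckout(amountUsd: number): Promise<void> {
      // Custom events show up in the App Center analytics dashboard.
      await Analytics.trackEvent("checkout_completed", {
        amount_usd: amountUsd.toFixed(2),
      });
    }

    export async function logPreviousCrash(): Promise<void> {
      // Crash reports are collected and uploaded automatically; this just
      // surfaces whether the last session ended in a crash.
      const crashed = await Crashes.hasCrashedInLastSession();
      if (crashed) {
        console.warn("Previous session crashed; report sent to App Center.");
      }
    }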

Microsoft's goal with App Center is to make it easy for developers to adopt best practices around building, testing, reporting, and so on; App Center is a one-stop shop that handles all of these for you. Under the covers it uses VSTS. This means that if your needs grow beyond App Center (for example, if you have server-side code whose builds, testing, and deployment must be synchronized with the client-side code), you can move up to the same workflows and capabilities in the full VSTS environment while still retaining access to everything App Center offers.

Of course, you still have to develop applications in the first place, and Microsoft is continuing its effort to make Visual Studio the best place for app development regardless of platform. Live Player, shown earlier this year at Build, greatly streamlines the develop-build-debug loop by pushing your application code to an iOS or Android device instantly, letting it run without deploying an updated app package each time.

This is particularly compelling for honing user interfaces. Interfaces written in XAML, Microsoft's .NET interface markup language, can be shown in Live Player, and they update live; as soon as you save the XAML changes, the UI shown on the device updates accordingly. You don't even need to navigate to a particular screen within the application to test it; you can have Live Player simply show arbitrary XAML files. This makes developing and testing interfaces substantially less painful.

Increasing the reach of machine learning

Microsoft also announced Visual Studio Tools for AI, a range of features to make developing machine learning applications within Visual Studio easier. With this tooling, Visual Studio will be able to create projects that are already set up to use frameworks such as TensorFlow or Microsoft's own CNTK.

Machine learning systems rely on models generated by large-scale training, with the training done on clusters and often accelerated with GPUs or dedicated accelerator chips. The resulting models can then be run on client machines. A model that's used for, say, detecting faces in video streams will still need a powerful client, but one far less powerful than the hardware needed for the initial training.
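
To make that split concrete, here is a toy TypeScript sketch using TensorFlow.js rather than the TensorFlow or CNTK toolchains the new tooling targets: the expensive fit step produces a saved model, and a much cheaper load-and-predict step is all a client needs.

    // Illustrative only: the train-once, run-anywhere split sketched with
    // TensorFlow.js. The real workflow uses TensorFlow or CNTK on a cluster,
    // but the shape is similar.
    import * as tf from "@tensorflow/tfjs-node";

    async function train(): Promise<void> {
      // Heavyweight step: fit a tiny toy model (y = 2x). In practice this
      // runs on GPU-accelerated training hardware, not the client.
      const model = tf.sequential();
      model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
      model.compile({ optimizer: "sgd", loss: "meanSquaredError" });

      const xs = tf.tensor2d([[1], [2], [3], [4]]);
      const ys = tf.tensor2d([[2], [4], [6], [8]]);
      await model.fit(xs, ys, { epochs: 200 });

      // Export the trained model so a lighter client can load it later.
      await model.save("file://./trained-model");
    }

    async function predict(): Promise<void> {
      // Lightweight step: load the exported model and run inference only.
      const model = await tf.loadLayersModel("file://./trained-model/model.json");
      const out = model.predict(tf.tensor2d([[10]])) as tf.Tensor;
      out.print(); // should be close to 20
    }

    train().then(() => predict()).catch(console.error);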

This model training is thus a good fit for cloud computing. The Tools for AI integrate with Azure's Batch AI Service, a managed environment providing a GPU-accelerated training cluster. Training jobs can be submitted from within Visual Studio, and progress can be tracked there, too, giving insight into things like the level of GPU utilization.

Once a model has been built, there are new ways of deploying it to devices. Microsoft has been talking up the notion of the "intelligent edge" as a counterpart to the "intelligent cloud"; this means pushing machine learning models onto edge devices to take advantage of local processing power where it makes sense to do so. A new framework, the AI Toolkit for Azure IoT Edge, is intended to streamline that process.

The company also announced a preview of Azure SQL Database Machine Learning Services, which allows machine learning models to be deployed into a SQL database and accessed directly. An example use case of this is a support ticketing system. A machine learning model could be generated to infer a priority for each ticket so that issues that seem to be urgent are prioritized automatically. With the new Azure services, this model can be run directly within the SQL database.
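
From the application's point of view, in-database scoring looks like just another database call. The TypeScript sketch below, using the mssql package, is a hypothetical illustration: the connection details and the dbo.PredictTicketPriority stored procedure are invented, and the actual Machine Learning Services interface may differ.

    // Hypothetical sketch: an app asks the database to score a support ticket
    // with a model deployed inside the database. Server, credentials, and the
    // stored procedure name are placeholders, not a real interface.
    import * as sql from "mssql";

    export async function scoreTicket(ticketText: string): Promise<number> {
      const pool = await sql.connect({
        server: "example.database.windows.net", // placeholder server name
        database: "helpdesk",
        user: "app_user",
        password: process.env.SQL_PASSWORD ?? "",
        options: { encrypt: true },
      });

      // The model runs inside the database; the app just calls a procedure
      // that wraps it and reads back the predicted priority.
      const result = await pool
        .request()
        .input("ticket_text", sql.NVarChar, ticketText)
        .execute("dbo.PredictTicketPriority");

      return result.recordset[0].predicted_priority as number;
    }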

As much as Microsoft and other companies have been talking up machine learning, it remains something of an unknown for many developers. While high-level systems such as Cognitive Services don't require much knowledge of the details of machine learning—they use prebuilt, off-the-shelf models, making them quick and easy to start using—developers who want to create their own models will need to learn and understand new frameworks and techniques.

Microsoft's attempt to fill that knowledge gap is its AI school. As Microsoft builds out this range of systems and capabilities, it hopes that machine learning will become more accessible and turn up in more places.

