Visual Studio allows you to easily create a Node.js project and experience IntelliSense and other built-in features that support Node.js. In this tutorial for Visual Studio, you create a Node.js web application project from a Visual Studio template. Then, you create a simple app using React.
In this tutorial, you learn how to:
- Create a Node.js project
- Add npm packages
- Add React code to your app
- Transpile JSX
- Attach the debugger
Before you begin
Here's a quick FAQ to introduce you to some key concepts.
What is Node.js?
Node.js is a runtime environment that executes JavaScript code on the server, outside the browser.
What is npm?
npm is the default package manager for Node.js. It makes it easier for programmers to publish and share the source code of Node.js libraries, and it simplifies installing, updating, and uninstalling libraries.
What is React?
React is a front-end JavaScript library for building user interfaces.
What is JSX?
JSX is a JavaScript syntax extension, typically used with React to describe UI elements. JSX code must be transpiled to plain JavaScript before it can run in a browser.
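As a rough illustration of what "transpiled to plain JavaScript" means, here is a sketch using a simplified stand-in for React.createElement (the real function returns richer element objects; this stub is an assumption for demonstration only):

```javascript
// What a JSX transpiler does, sketched with a simplified stand-in for
// React.createElement (the real one returns richer element objects).
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// JSX source:          <h1 className="title">Hello</h1>
// transpiles to roughly:
const el = createElement('h1', { className: 'title' }, 'Hello');

console.log(el.type);            // 'h1'
console.log(el.props.className); // 'title'
```

The browser never sees JSX; it only ever runs function calls like the one above.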
What is webpack?
webpack bundles JavaScript files so they can run in a browser. It can also transform or package other resources and assets. It is often used to specify a compiler, such as Babel or TypeScript, to transpile JSX or TypeScript code to plain JavaScript.
Prerequisites
- You must have Visual Studio installed, along with the Node.js development workload. If you haven't already installed Visual Studio 2019 or 2017, go to the Visual Studio downloads page to install it for free. If you already have Visual Studio but need to install the workload, go to Tools > Get Tools and Features..., which opens the Visual Studio Installer. Choose the Node.js development workload, then choose Modify.
- You must have the Node.js runtime installed. This tutorial was tested with version 12.6.2. If you don't have it installed, we recommend installing the LTS version from the Node.js website for the best compatibility with outside frameworks and libraries. Node.js is built for 32-bit and 64-bit architectures, and the Node.js tools in Visual Studio (included in the Node.js workload) support both. Only one is required, and the Node.js installer only supports one being installed at a time.

  In general, Visual Studio automatically detects the installed Node.js runtime. If it does not detect an installed runtime, you can configure your project to reference the installed runtime in the properties page (after you create a project, right-click the project node, choose Properties, and set the Node.exe path). You can use a global installation of Node.js, or you can specify the path to a local interpreter in each of your Node.js projects.
Create a project
First, create a Node.js web application project.
- Open Visual Studio.
- Create a new project. Press Esc to close the start window, press Ctrl+Q to open the search box, type Node.js, then choose Blank Node.js Web Application - JavaScript. (Although this tutorial uses the TypeScript compiler, the steps require that you start with the JavaScript template.) In the dialog box that appears, choose Create.

  Alternatively, from the top menu bar, choose File > New > Project. In the left pane of the New Project dialog box, expand JavaScript, then choose Node.js. In the middle pane, choose Blank Node.js Web Application, type the name NodejsWebAppBlank, then choose OK.

  If you don't see the Blank Node.js Web Application project template, you must add the Node.js development workload. For detailed instructions, see the Prerequisites.

  Visual Studio creates the new solution and opens your project.

  (1) Highlighted in bold is your project, using the name you gave in the New Project dialog box. In the file system, this project is represented by a .njsproj file in your project folder. You can set properties and environment variables associated with the project by right-clicking the project and choosing Properties. You can round-trip with other development tools, because the project file does not make custom changes to the Node.js project source.

  (2) At the top level is a solution, which by default has the same name as your project. A solution, represented by a .sln file on disk, is a container for one or more related projects.

  (3) The npm node shows any installed npm packages. You can right-click the npm node to search for and install npm packages using a dialog box, or install and update packages using the settings in package.json and right-click options on the npm node.

  (4) package.json is a file used by npm to manage package dependencies and package versions for locally installed packages. For more information, see Manage npm packages.

  (5) Project files such as server.js show up under the project node. server.js is the project startup file, which is why it appears in bold. You can set the startup file by right-clicking a file in the project and selecting Set as Node.js startup file.
Add npm packages
This app requires a number of npm modules to run correctly.
- react
- react-dom
- express
- path
- ts-loader
- typescript
- webpack
- webpack-cli
- In Solution Explorer (right pane), right-click the npm node in the project and choose Install New npm Packages.

  In the Install New npm Packages dialog box, you can choose to install the most current package version or specify a version. If you choose to install the current versions of these packages but run into unexpected errors later, you may want to install the exact package versions described later in these steps.
- In the Install New npm Packages dialog box, search for the react package, and select Install Package to install it. Select the Output window to see progress on installing the package (select Npm in the Show output from field). When installed, the package appears under the npm node. The project's package.json file is updated with the new package information, including the package version.
- Instead of using the UI to search for and add the rest of the packages one at a time, paste the following code into package.json. To do this, add a dependencies section with this code. If there is already a dependencies section in your version of the blank template, just replace it with the preceding JSON code. For more information on use of this file, see package.json configuration.
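For reference, the dependencies section described above has roughly this shape. The "*" version specifiers below are placeholders, not the tutorial's tested versions; pin the exact versions given in these steps:

```json
"dependencies": {
  "express": "*",
  "path": "*",
  "react": "*",
  "react-dom": "*",
  "ts-loader": "*",
  "typescript": "*",
  "webpack": "*",
  "webpack-cli": "*"
}
```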
- Save the changes.
- Right-click the npm node in your project and choose Install npm Packages. This command runs the npm install command directly.

  In the lower pane, select the Output window to see progress on installing the packages. Installation may take a few minutes and you may not see results immediately. To see the output, make sure that you select Npm in the Show output from field in the Output window. Here are the npm modules as they appear in Solution Explorer after they are installed.

  Note: If you prefer to install npm packages using the command line, right-click the project node and choose Open Command Prompt Here. Use standard Node.js commands to install packages.
Add project files
In these steps, you add four new files to your project.
- app.tsx
- webpack-config.js
- index.html
- tsconfig.json
For this simple app, you add the new project files in the project root. (In most apps, you typically add the files to subfolders and adjust relative path references accordingly.)
- In Solution Explorer, right-click the project NodejsWebAppBlank and choose Add > New Item.
- In the Add New Item dialog box, choose TypeScript JSX file, type the name app.tsx, and select Add or OK.
- Repeat these steps to add webpack-config.js. Instead of a TypeScript JSX file, choose JavaScript file.
- Repeat the same steps to add index.html to the project. Instead of a JavaScript file, choose HTML file.
- Repeat the same steps to add tsconfig.json to the project. Instead of a JavaScript file, choose TypeScript JSON Configuration file.
Add app code
- Open server.js and replace the existing code with the following code, which uses Express to start Node.js as your web application server. This code sets the port to the port number configured in the project properties (by default, the port is configured to 1337 in the properties). To open the project properties, right-click the project in Solution Explorer and choose Properties.
- Open app.tsx and add the following code, which uses JSX syntax and React to display a simple message.
- Open index.html and replace the body section with the following code. This HTML page loads app-bundle.js, which contains the JSX and React code transpiled to plain JavaScript. Currently, app-bundle.js is an empty file. In the next section, you configure options to transpile the code.
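A plausible body section is sketched below; the dist/ path and the element id are assumptions based on the later steps, not the tutorial's exact markup:

```html
<body>
  <div id="root"></div>
  <!-- the transpiled bundle produced by webpack in a later step -->
  <script src="./dist/app-bundle.js"></script>
</body>
```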
Configure webpack and TypeScript compiler options
In the previous steps, you added webpack-config.js to the project. Next, you add webpack configuration code. You will add a simple webpack configuration that specifies an input file (app.tsx) and an output file (app-bundle.js) for bundling and transpiling JSX to plain JavaScript. For transpiling, you also configure some TypeScript compiler options. This code is a basic configuration that is intended as an introduction to webpack and the TypeScript compiler.
- In Solution Explorer, open webpack-config.js and add the webpack configuration code. This configuration instructs webpack to use the TypeScript loader to transpile the JSX.
- Open tsconfig.json and replace the default code with the TypeScript compiler options, which specify app.tsx as the source file.
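One plausible set of compiler options is sketched below; the jsx and files entries are the essential parts, while the rest are assumptions:

```json
{
  "compilerOptions": {
    "noImplicitAny": false,
    "module": "commonjs",
    "target": "es5",
    "jsx": "react",
    "sourceMap": true
  },
  "files": ["app.tsx"]
}
```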
Transpile the JSX
- In Solution Explorer, right-click the project node and choose Open Command Prompt Here.
- In the command prompt, type the following command:

  node_modules\.bin\webpack app.tsx --config webpack-config.js
The command prompt window shows the result. If you see any errors instead of the preceding output, you must resolve them before your app will work. If your npm package versions are different than the versions shown in this tutorial, that can be a source of errors. One way to fix errors is to use the exact versions shown in the earlier steps. Also, if one or more of these package versions has been deprecated and results in an error, you may need to install a more recent version to fix errors. For information on using package.json to control npm package versions, see package.json configuration.

- In Solution Explorer, right-click the project node and choose Add > Existing Folder, then choose the dist folder and choose Select Folder. Visual Studio adds the dist folder to the project, which contains app-bundle.js and app-bundle.js.map.
- Open app-bundle.js to see the transpiled JavaScript code.
- If prompted to reload externally modified files, select Yes to All.
Each time you make changes to app.tsx, you must rerun the webpack command. To automate this step, add a build script to transpile the JSX.
Add a build script to transpile the JSX
Starting in Visual Studio 2019, a build script is required. Instead of transpiling JSX at the command line (as shown in the preceding section), you can transpile JSX when building from Visual Studio.
- Open package.json and add the following section after the dependencies section:
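One plausible scripts section, reusing the webpack command from the transpile step; the exact invocation is an assumption:

```json
"scripts": {
  "build": "webpack-cli app.tsx --config webpack-config.js"
}
```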
Run the app
- Select either Web Server (Google Chrome) or Web Server (Microsoft Edge) as the current debug target. If Chrome is available on your machine but does not show up as an option, choose Browse With from the debug target dropdown list, and select Chrome as the default browser target (choose Set as Default).
- To run the app, press F5 (Debug > Start Debugging) or the green arrow button. A Node.js console window opens and shows the port on which the debugger is listening. Visual Studio starts the app by launching the startup file, server.js.
- Close the browser window.
- Close the console window.
Set a breakpoint and run the app
- In server.js, click in the gutter to the left of the staticPath declaration to set a breakpoint. Breakpoints are the most basic and essential feature of reliable debugging. A breakpoint indicates where Visual Studio should suspend your running code so you can look at the values of variables, the behavior of memory, or whether a branch of code is getting run.
- To run the app, press F5 (Debug > Start Debugging). The debugger pauses at the breakpoint you set (the current statement is marked in yellow). Now you can inspect your app state by hovering over variables that are currently in scope, using debugger windows like the Locals and Watch windows.
- Press F5 to continue the app.
- If you want to use the Chrome Developer Tools or F12 Tools for Microsoft Edge, press F12. You can use these tools to examine the DOM and interact with the app using the JavaScript Console.
- Close the web browser and the console.
Set and hit a breakpoint in the client-side React code
In the preceding section, you attached the debugger to server-side Node.js code. To attach the debugger from Visual Studio and hit breakpoints in client-side React code, the debugger needs help to identify the correct process. Here is one way to enable this.
Prepare the browser for debugging
For this scenario, use either Microsoft Edge (Chromium), currently named Microsoft Edge Beta in the IDE, or Chrome.
- Close all windows for the target browser. Other browser instances can prevent the browser from opening with debugging enabled. (Browser extensions may be running and preventing full debug mode, so you may need to open Task Manager to find unexpected instances of Chrome.) For Microsoft Edge (Chromium), also shut down all instances of Chrome. Because both browsers share the Chromium code base, this gives the best results.
- Start your browser with debugging enabled. Starting in Visual Studio 2019, you can set the --remote-debugging-port=9222 flag at browser launch by selecting Browse With... from the Debug toolbar, then choosing Add, and then setting the flag in the Arguments field. Use a different friendly name for the browser, such as Edge with Debugging or Chrome with Debugging. For details, see the Release Notes.

  Alternatively, open the Run command from the Windows Start button (right-click and choose Run), and enter one of the following commands:

  msedge --remote-debugging-port=9222

  chrome.exe --remote-debugging-port=9222

  This starts your browser with debugging enabled. The app is not yet running, so you get an empty browser page.
Attach the debugger to client-side script
- Switch to Visual Studio and then set a breakpoint in your source code, either app-bundle.js or app.tsx. For app-bundle.js, set the breakpoint in the render() function; to find the render() function in the transpiled app-bundle.js file, use Ctrl+F (Edit > Find and Replace > Quick Find). For app.tsx, set the breakpoint inside the render() function, on the return statement.
- If you are setting the breakpoint in the .tsx file (rather than app-bundle.js), you need to update webpack-config.js. This is a development-only change to enable debugging in Visual Studio: it overrides the generated references in the source map file, app-bundle.js.map, when building the app. By default, webpack references in the source map file include the webpack:/// prefix, which prevents Visual Studio from finding the source file, app.tsx. Specifically, with this change, the reference to the source file, app.tsx, changes from webpack:///./app.tsx to ./app.tsx, which enables debugging.
- Select your target browser as the debug target in Visual Studio, then press Ctrl+F5 (Debug > Start Without Debugging) to run the app in the browser. If you created a browser configuration with a friendly name, choose that as your debug target. The app opens in a new browser tab.
- Choose Debug > Attach to Process.

  Tip: Starting in Visual Studio 2017, once you attach to the process the first time by following these steps, you can quickly reattach to the same process by choosing Debug > Reattach to Process.
- In the Attach to Process dialog box, get a filtered list of browser instances that you can attach to. In Visual Studio 2019, choose the correct debugger for your target browser, JavaScript (Chrome) or JavaScript (Microsoft Edge - Chromium), in the Attach to field, then type chrome or edge in the filter box to filter the search results. In Visual Studio 2017, choose Webkit code in the Attach to field and type chrome in the filter box to filter the search results.
- Select the browser process with the correct host port (localhost in this example), and select Attach. The port (1337) may also appear in the Title field to help you select the correct browser instance. You know the debugger has attached correctly when the DOM Explorer and the JavaScript Console open in Visual Studio. These debugging tools are similar to Chrome Developer Tools and F12 Tools for Microsoft Edge.

  Tip: If the debugger does not attach and you see the message 'Unable to attach to the process. An operation is not legal in the current state.', use the Task Manager to close all instances of the target browser before starting the browser in debugging mode. Browser extensions may be running and preventing full debug mode.
- Because the code with the breakpoint already executed, refresh your browser page to hit the breakpoint. While paused in the debugger, you can examine your app state by hovering over variables and using debugger windows. You can advance the debugger by stepping through code (F5, F10, and F11). For more information on basic debugging features, see First look at the debugger. You may hit the breakpoint in either app-bundle.js or its mapped location in app.tsx, depending on which steps you followed previously, along with your environment and browser state. Either way, you can step through code and examine variables.
- If you need to break into code in app.tsx and are unable to do it, use Attach to Process as described in the previous steps to attach the debugger, and make sure that your environment is set up correctly:
  - You closed all browser instances, including Chrome extensions (using the Task Manager), so that you can run the browser in debug mode, and you started the browser in debug mode.
  - Your source map file includes a reference to ./app.tsx and not webpack:///./app.tsx, which prevents the Visual Studio debugger from locating app.tsx.

  Alternatively, if you still cannot break into code in app.tsx, try using the debugger; statement in app.tsx, or set breakpoints in the Chrome Developer Tools (or F12 Tools for Microsoft Edge) instead.
- If you need to break into code in app-bundle.js and are unable to do it, remove the source map file, app-bundle.js.map.
Next steps
WebRTC is an open source project to enable realtime communication of audio, video and data in Web and native apps.
WebRTC has several JavaScript APIs — click the links to see demos.
- getUserMedia(): capture audio and video.
- MediaRecorder: record audio and video.
- RTCPeerConnection: stream audio and video between users.
- RTCDataChannel: stream data between users.
Where can I use WebRTC?
In Firefox, Opera and in Chrome on desktop and Android. WebRTC is also available for native apps on iOS and Android.
What is signaling?
WebRTC uses RTCPeerConnection to communicate streaming data between browsers, but also needs a mechanism to coordinate communication and to send control messages, a process known as signaling. Signaling methods and protocols are not specified by WebRTC. In this codelab you will use Socket.IO for messaging, but there are many alternatives.
What are STUN and TURN?
WebRTC is designed to work peer-to-peer, so users can connect by the most direct route possible. However, WebRTC is built to cope with real-world networking: client applications need to traverse NAT gateways and firewalls, and peer to peer networking needs fallbacks in case direct connection fails. As part of this process, the WebRTC APIs use STUN servers to get the IP address of your computer, and TURN servers to function as relay servers in case peer-to-peer communication fails. ( WebRTC in the real world explains in more detail.)
Is WebRTC secure?
Encryption is mandatory for all WebRTC components, and its JavaScript APIs can only be used from secure origins (HTTPS or localhost). Signaling mechanisms aren't defined by WebRTC standards, so it's up to you to make sure you use secure protocols.
Looking for more? Check out the resources at webrtc.org/start.
Build an app to get video and take snapshots with your webcam and share them peer-to-peer via WebRTC. Along the way you'll learn how to use the core WebRTC APIs and set up a messaging server using Node.js.
What you'll learn
- Get video from your webcam
- Stream video with RTCPeerConnection
- Stream data with RTCDataChannel
- Set up a signaling service to exchange messages
- Combine peer connection and signaling
- Take a photo and share it via a data channel
What you'll need
- Chrome 47 or above
- Web Server for Chrome, or use your own web server of choice.
- The sample code
- A text editor
- Basic knowledge of HTML, CSS and JavaScript
Download the code
If you're familiar with git, you can download the code for this codelab from GitHub by cloning it:
Alternatively, click the following button to download a .zip file of the code:
Open the downloaded zip file. This will unpack a project folder (adaptive-web-media) that contains one folder for each step of this codelab, along with all of the resources you will need.
You'll be doing all your coding work in the directory named work.
The step-nn folders contain a finished version for each step of this codelab. They are there for reference.
Install and verify web server
While you're free to use your own web server, this codelab is designed to work well with the Chrome Web Server. If you don't have that app installed yet, you can install it from the Chrome Web Store.
After installing the Web Server for Chrome app, click on the Chrome Apps shortcut from the bookmarks bar, a New Tab page, or from the App Launcher:
Click on the Web Server icon:
Next, you'll see this dialog, which allows you to configure your local web server:
Click the CHOOSE FOLDER button, and select the work folder you just created. This will enable you to view your work in progress in Chrome via the URL highlighted in the Web Server dialog in the Web Server URL(s) section.
Under Options, check the box next to Automatically show index.html as shown below:
Then stop and restart the server by sliding the toggle labeled Web Server: STARTED to the left and then back to the right.
Now visit your work site in your web browser by clicking on the highlighted Web Server URL. You should see a page that looks like this, which corresponds to work/index.html:
Obviously, this app is not yet doing anything interesting — so far, it's just a minimal skeleton we're using to make sure your web server is working properly. You'll add functionality and layout features in subsequent steps.
From this point forward, all testing and verification should be performed using this web server setup. You'll usually be able to get away with simply refreshing your test browser tab.
What you'll learn
In this step you'll find out how to:
- Get a video stream from your webcam.
- Manipulate stream playback.
- Use CSS and SVG to manipulate video.
A complete version of this step is in the step-01 folder.
A dash of HTML..
Add a video element and a script element to index.html in your work directory:
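A minimal sketch of those two elements; the js/main.js path comes from the next step, and the autoplay attribute (called out in the Tips below) is needed to see more than a single frame:

```html
<video autoplay playsinline></video>
<script src="js/main.js"></script>
```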
..and a pinch of JavaScript
Add the following to main.js in your js folder:
All the JavaScript examples here use 'use strict'; to avoid common coding gotchas. Find out more about what that means in ECMAScript 5 Strict Mode, JSON, and More.
Try it out
Open index.html in your browser and you should see something like this (featuring the view from your webcam, of course!):
How it works
Following the getUserMedia() call, the browser requests permission from the user to access their camera (if this is the first time camera access has been requested for the current origin). If successful, a MediaStream is returned, which can be used by a media element via the srcObject attribute:
The constraints argument allows you to specify what media to get. In this example, video only, since audio is disabled by default:
You can use constraints for additional requirements such as video resolution:
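The two constraints objects described above can be sketched as follows; the 1280x720 resolution is an illustrative value, not a requirement of the codelab:

```javascript
// Constraints passed to getUserMedia(): video only (this step's case)...
const constraints = { audio: false, video: true };

// ...or with an added resolution requirement (illustrative values):
const hdConstraints = { audio: false, video: { width: 1280, height: 720 } };

// getUserMedia() is a browser-only API, so the call is guarded to keep
// this sketch inert outside a browser:
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia(constraints)
    .then(stream => console.log('got stream', stream))
    .catch(err => console.error('getUserMedia error', err));
}
```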
The MediaTrackConstraints specification lists all potential constraint types, though not all options are supported by all browsers. If the resolution requested isn't supported by the currently selected camera, getUserMedia() will be rejected with an OverconstrainedError, and the user will not be prompted to give permission to access their camera.

You can view a demo showing how to use constraints to request different resolutions here, and a demo using constraints to choose camera and microphone here.
If getUserMedia() is successful, the video stream from the webcam is set as the source of the video element:
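A sketch of that success path; the function and parameter names are illustrative, and 'video' stands for the page's video element:

```javascript
// Sketch of the getUserMedia() success handler: attach the webcam stream
// to the video element ('video' is assumed to be the page's <video> element).
function gotLocalMediaStream(mediaStream, video) {
  window.localStream = mediaStream; // exposed for console inspection
  video.srcObject = mediaStream;    // the modern way to set a media source
}
```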
Bonus points
- The localStream object passed to getUserMedia() is in global scope, so you can inspect it from the browser console: open the console, type stream, and press Return. (To view the console in Chrome, press Ctrl-Shift-J, or Command-Option-J if you're on a Mac.)
- What does localStream.getVideoTracks() return?
- Try calling localStream.getVideoTracks()[0].stop().
- Look at the constraints object: what happens when you change it to {audio: true, video: true}?
- What size is the video element? How can you get the video's natural size from JavaScript, as opposed to display size? Use the Chrome Dev Tools to check.
- Try adding CSS filters to the video element. For example:
- Try adding SVG filters. For example:
What you learned
In this step you learned how to:
- Get video from your webcam.
- Set media constraints.
- Mess with the video element.
A complete version of this step is in the step-01 folder.
Tips
- Don't forget the autoplay attribute on the video element. Without that, you'll only see a single frame!
- There are lots more options for getUserMedia() constraints. Take a look at the demo at webrtc.github.io/samples/src/content/peerconnection/constraints. As you'll see, there are lots of interesting WebRTC samples on that site.
Best practice
- Make sure your video element doesn't overflow its container. We've added width and max-width to set a preferred size and a maximum size for the video. The browser will calculate the height automatically:
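A sketch of that sizing rule; the 320px width is an illustrative value, not the codelab's exact stylesheet:

```css
video {
  max-width: 100%;
  width: 320px;
}
```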
Next up
You've got video, but how do you stream it? Find out in the next step!
What you'll learn
In this step you'll find out how to:
- Abstract away browser differences with the WebRTC shim, adapter.js.
- Use the RTCPeerConnection API to stream video.
- Control media capture and streaming.
A complete version of this step is in the step-2 folder.
What is RTCPeerConnection?
RTCPeerConnection is an API for making WebRTC calls to stream video and audio, and exchange data.
This example sets up a connection between two RTCPeerConnection objects (known as peers) on the same page.
Not much practical use, but good for understanding how RTCPeerConnection works.
Add video elements and control buttons
In index.html replace the single video element with two video elements and three buttons:
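One plausible version of that markup is sketched below; the element ids and the third button's label are assumptions consistent with the Start and Call buttons mentioned later in this step:

```html
<video id="localVideo" autoplay playsinline></video>
<video id="remoteVideo" autoplay playsinline></video>

<div>
  <button id="startButton">Start</button>
  <button id="callButton">Call</button>
  <button id="hangupButton">Hang Up</button>
</div>
```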
One video element will display the stream from getUserMedia() and the other will show the same video streamed via RTCPeerConnection. (In a real-world application, one video element would display the local stream and the other the remote stream.)
Add the adapter.js shim
Add a link to the current version of adapter.js above the link to main.js:
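The result might look like this; the adapter-latest.js URL is the commonly used hosted build, but check the adapter.js GitHub repo mentioned below for what's right for your app:

```html
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
<script src="js/main.js"></script>
```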
adapter.js is a shim to insulate apps from spec changes and prefix differences. (Though in fact, the standards and protocols used for WebRTC implementations are highly stable, and there are only a few prefixed names.)
In this step, we've linked to the most recent version of adapter.js, which is fine for a codelab but may not be right for a production app. The adapter.js GitHub repo explains techniques for making sure your app always accesses the most recent version.
For full information about WebRTC interop, see webrtc.org/web-apis/interop.
Your index.html should now look like this:
Install the RTCPeerConnection code
Replace main.js with the version in the step-02 folder.
It's not ideal doing cut-and-paste with large chunks of code in a codelab, but in order to get RTCPeerConnection up and running, there's no alternative but to go the whole hog.
You'll learn how the code works in a moment.
Make the call
Open index.html, click the Start button to get video from your webcam, and click Call to make the peer connection. You should see the same video (from your webcam) in both video elements. View the browser console to see WebRTC logging.
How it works
This step does a lot..
If you want to skip the explanation below, that's fine.
You can still continue with the codelab!
WebRTC uses the RTCPeerConnection API to set up a connection to stream video between WebRTC clients, known as peers.
In this example, the two RTCPeerConnection objects are on the same page: pc1 and pc2. Not much practical use, but good for demonstrating how the APIs work.
Setting up a call between WebRTC peers involves three tasks:
- Create an RTCPeerConnection for each end of the call and, at each end, add the local stream from getUserMedia().
- Get and share network information: potential connection endpoints are known as ICE candidates.
- Get and share local and remote descriptions: metadata about local media in SDP format.
Imagine that Alice and Bob want to use RTCPeerConnection to set up a video chat.
First up, Alice and Bob exchange network information. The expression 'finding candidates' refers to the process of finding network interfaces and ports using the ICE framework.
- Alice creates an RTCPeerConnection object with an onicecandidate (addEventListener('icecandidate')) handler. This corresponds to the following code from main.js:
The servers argument to RTCPeerConnection isn't used in this example.
This is where you could specify STUN and TURN servers.
WebRTC is designed to work peer-to-peer, so users can connect by the most direct route possible. However, WebRTC is built to cope with real-world networking: client applications need to traverse NAT gateways and firewalls, and peer to peer networking needs fallbacks in case direct connection fails.
As part of this process, the WebRTC APIs use STUN servers to get the IP address of your computer, and TURN servers to function as relay servers in case peer-to-peer communication fails. WebRTC in the real world explains in more detail.
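For illustration, here is what the servers argument could look like if it were used. The STUN URL is Google's public test server; the TURN entry (hostname and credentials) is a hypothetical placeholder, since real deployments need their own relay and credentials:

```javascript
// Example RTCPeerConnection configuration (not used in this step).
// The TURN server shown is a placeholder, not a real service.
const servers = {
  iceServers: [
    // Public STUN server: lets a client discover its external IP/port.
    { urls: 'stun:stun.l.google.com:19302' },
    // Hypothetical TURN relay: used only if direct connection fails.
    {
      urls: 'turn:turn.example.com:3478',
      username: 'webrtc-user',
      credential: 'secret'
    }
  ]
};
// In the browser: new RTCPeerConnection(servers)
```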
- Alice calls getUserMedia() and adds the stream passed back:
- The onicecandidate handler from step 1 is called when network candidates become available.
- Alice sends serialized candidate data to Bob. In a real application, this process (known as signaling) takes place via a messaging service – you'll learn how to do that in a later step. Of course, in this step, the two RTCPeerConnection objects are on the same page and can communicate directly with no need for external messaging.
- When Bob gets a candidate message from Alice, he calls addIceCandidate() to add the candidate to the remote peer description:
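The candidate exchange above can be sketched in plain JavaScript. The message shape used here ({ type, candidate, sdpMid, sdpMLineIndex }) is an assumption for illustration; real apps define their own wire format:

```javascript
// Sketch of signaling a candidate: the fields of an RTCIceCandidate
// are copied into a plain object and stringified for the wire.
function serializeCandidate(candidate) {
  return JSON.stringify({
    type: 'candidate',
    candidate: candidate.candidate,
    sdpMid: candidate.sdpMid,
    sdpMLineIndex: candidate.sdpMLineIndex
  });
}

// The receiving side parses the message back into an object.
// In the browser, this object can be passed to addIceCandidate().
function deserializeCandidate(message) {
  const data = JSON.parse(message);
  return {
    candidate: data.candidate,
    sdpMid: data.sdpMid,
    sdpMLineIndex: data.sdpMLineIndex
  };
}
```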
WebRTC peers also need to find out and exchange local and remote audio and video media information, such as resolution and codec capabilities. Signaling to exchange media configuration information proceeds by exchanging blobs of metadata, known as an offer and an answer, using the Session Description Protocol format, known as SDP:
- Alice runs the RTCPeerConnection createOffer() method. The promise returned provides an RTCSessionDescription: Alice's local session description:
- If successful, Alice sets the local description using setLocalDescription() and then sends this session description to Bob via their signaling channel.
- Bob sets the description Alice sent him as the remote description using setRemoteDescription().
- Bob runs the RTCPeerConnection createAnswer() method, passing it the remote description he got from Alice, so a local session can be generated that is compatible with hers. The createAnswer() promise passes on an RTCSessionDescription: Bob sets that as the local description and sends it to Alice.
- When Alice gets Bob's session description, she sets that as the remote description with setRemoteDescription().
- Ping!
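The whole offer/answer sequence above can be condensed into one sketch. In the browser, caller and callee would be RTCPeerConnection objects (like pc1 and pc2 here); the function relies only on the four methods shown:

```javascript
// Sketch of the offer/answer exchange between two peer-connection-like
// objects. The signaling channel is a direct function call, since both
// objects are assumed to be on the same page, as in this step.
async function negotiate(caller, callee) {
  // Alice creates an offer (an RTCSessionDescription in SDP format)
  // and sets it as her local description.
  const offer = await caller.createOffer();
  await caller.setLocalDescription(offer);

  // The offer travels to Bob "over signaling" and becomes his
  // remote description.
  await callee.setRemoteDescription(offer);

  // Bob generates a compatible answer and sends it back to Alice.
  const answer = await callee.createAnswer();
  await callee.setLocalDescription(answer);
  await caller.setRemoteDescription(answer);

  return { offer, answer };
}
```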
Bonus points
- Take a look at chrome://webrtc-internals. This provides WebRTC stats and debugging data. (A full list of Chrome URLs is at chrome://about.)
- Style the page with CSS:
- Put the videos side by side.
- Make the buttons the same width, with bigger text.
- Make sure the layout works on mobile.
- From the Chrome Dev Tools console, look at localStream, localPeerConnection and remotePeerConnection.
- From the console, look at localPeerConnection.localDescription. What does SDP format look like?
What you learned
In this step you learned how to:
- Abstract away browser differences with the WebRTC shim, adapter.js.
- Use the RTCPeerConnection API to stream video.
- Control media capture and streaming.
- Share media and network information between peers to enable a WebRTC call.
A complete version of this step is in the step-02 folder.
Tips
- There's a lot to learn in this step! To find other resources that explain RTCPeerConnection in more detail, take a look at webrtc.org/start. This page includes suggestions for JavaScript frameworks, if you'd like to use WebRTC but don't want to wrangle the APIs.
- Find out more about the adapter.js shim from the adapter.js GitHub repo.
- Want to see what the world's best video chat app looks like? Take a look at AppRTC, the WebRTC project's canonical app for WebRTC calls: app, code. Call setup time is less than 500 ms.
Best practice
- To future-proof your code, use the new Promise-based APIs and enable compatibility with browsers that don't support them by using adapter.js.
Next up
This step shows how to use WebRTC to stream video between peers — but this codelab is also about data!
In the next step find out how to stream arbitrary data using RTCDataChannel.
What you'll learn
- How to exchange data between WebRTC endpoints (peers).
A complete version of this step is in the step-03 folder.
Update your HTML
For this step, you'll use WebRTC data channels to send text between two textarea elements on the same page. That's not very useful in itself, but it does demonstrate how WebRTC can be used to share data as well as stream video.
Remove the video and button elements from index.html and replace them with the following HTML:
One textarea will be for entering text, the other will display the text as streamed between peers.
index.html should now look like this:
Update your JavaScript
Replace main.js with the contents of step-03/js/main.js.
As with the previous step, cutting and pasting large chunks of code isn't ideal in a codelab, but (as with RTCPeerConnection) there's no alternative.
Try out streaming data between peers: open index.html, press Start to set up the peer connection, enter some text in the textarea on the left, then click Send to transfer the text using WebRTC data channels.
How it works
This code uses RTCPeerConnection and RTCDataChannel to enable exchange of text messages.
Much of the code in this step is the same as for the RTCPeerConnection example.
The sendData() and createConnection() functions have most of the new code:
The syntax of RTCDataChannel is deliberately similar to WebSocket, with a send() method and a message event.
Notice the use of dataConstraint. Data channels can be configured to enable different types of data sharing — for example, prioritizing reliable delivery over performance. You can find out more information about options at Mozilla Developer Network.
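The trade-off described above can be illustrated with a small helper. The option names (ordered, maxRetransmits) are real RTCDataChannelInit fields, but the helper and its two "modes" are this sketch's own labels, not part of the API:

```javascript
// Hypothetical helper: pick RTCDataChannelInit options for two common
// trade-offs when creating a data channel.
function channelOptions(mode) {
  if (mode === 'reliable') {
    // SCTP default: ordered, fully retransmitted delivery (TCP-like).
    // Good for file transfer, where every byte must arrive.
    return { ordered: true };
  }
  // Favour latency over completeness (UDP-like): allow out-of-order
  // delivery and never retransmit. Suitable for game state or live
  // telemetry, where a stale update is worthless anyway.
  return { ordered: false, maxRetransmits: 0 };
}
// In the browser: pc.createDataChannel('chat', channelOptions('reliable'))
```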
Three types of constraints
It's confusing!
Different types of WebRTC call setup options are all often referred to as ‘constraints'.
Find out more about constraints and options:
Bonus points
- With SCTP, the protocol used by WebRTC data channels, reliable and ordered data delivery is on by default. When might RTCDataChannel need to provide reliable delivery of data, and when might performance be more important — even if that means losing some data?
- Use CSS to improve page layout, and add a placeholder attribute to the 'dataChannelReceive' textarea.
- Test the page on a mobile device.
What you learned
In this step you learned how to:
- Establish a connection between two WebRTC peers.
- Exchange text data between the peers.
A complete version of this step is in the step-03 folder.
Find out more
- WebRTC data channels (a couple of years old, but still worth reading)
Next up
You've learned how to exchange data between peers on the same page, but how do you do this between different machines? First, you need to set up a signaling channel to exchange metadata messages. Find out how in the next step!
What you'll learn
In this step, you'll find out how to:
- Use npm to install project dependencies as specified in package.json.
- Run a Node.js server and use node-static to serve static files.
- Set up a messaging service on Node.js using Socket.IO.
- Use that to create ‘rooms' and exchange messages.
A complete version of this step is in the step-04 folder.
Concepts
In order to set up and maintain a WebRTC call, WebRTC clients (peers) need to exchange metadata:
- Candidate (network) information.
- Offer and answer messages providing information about media, such as resolution and codecs.
In other words, an exchange of metadata is required before peer-to-peer streaming of audio, video, or data can take place. This process is called signaling.
In the previous steps, the sender and receiver RTCPeerConnection objects are on the same page, so ‘signaling' is simply a matter of passing metadata between objects.
In a real world application, the sender and receiver RTCPeerConnections run in web pages on different devices, and you need a way for them to communicate metadata.
For this, you use a signaling server: a server that can pass messages between WebRTC clients (peers). The actual messages are plain text: stringified JavaScript objects.
Prerequisite: Install Node.js
In order to run the next steps of this codelab (folders step-04 to step-06) you will need to run a server on localhost using Node.js.
You can download and install Node.js from this link or via your preferred package manager.
Once installed, you will be able to install the dependencies required for the next steps (by running npm install) and run a small localhost server to execute the codelab (by running node index.js). These commands will be indicated later, when they are required.
About the app
WebRTC uses a client-side JavaScript API, but for real-world usage also requires a signaling (messaging) server, as well as STUN and TURN servers. You can find out more here.
In this step you'll build a simple Node.js signaling server, using the Socket.IO Node.js module and JavaScript library for messaging. Experience with Node.js and Socket.IO will be useful, but not crucial; the messaging components are very simple.
Choosing the right signaling server
This codelab uses Socket.IO for a signaling server.
The design of Socket.IO makes it straightforward to build a service to exchange messages, and Socket.IO is suited to learning about WebRTC signaling because of its built-in concept of ‘rooms'.
However, for a production service, there are better alternatives. See How to Select a Signaling Protocol for Your Next WebRTC Project.
In this example, the server (the Node.js application) is implemented in index.js, and the client (the web app) is implemented in index.html.
The Node.js application in this step has two tasks.
First, it acts as a message relay:
Second, it manages WebRTC video chat ‘rooms':
Our simple WebRTC application will permit a maximum of two peers to share a room.
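The two-peers-per-room rule can be sketched as a small join handler. The function name, the Map-based room store, and the three return values are illustrative, not the codelab's exact server code:

```javascript
// Sketch of room management for a two-peer video chat. The first
// client to ask for a room creates it, the second joins it, and any
// further client is turned away.
function handleJoin(rooms, roomName, clientId) {
  const members = rooms.get(roomName) || [];
  if (members.length >= 2) {
    return 'full'; // the room already has its two peers
  }
  members.push(clientId);
  rooms.set(roomName, members);
  return members.length === 1 ? 'created' : 'joined';
}
```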
HTML & JavaScript
Update index.html so it looks like this:
You won't see anything on the page in this step: all logging is done to the browser console. (To view the console in Chrome, press Ctrl-Shift-J, or Command-Option-J if you're on a Mac.)
Replace js/main.js with the following:
Set up Socket.IO to run on Node.js
In the HTML file, you may have seen that you are using a Socket.IO file:
At the top level of your work directory create a file named package.json with the following contents:
This is an app manifest that tells Node Package Manager (npm) what project dependencies to install.
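The exact file is in the codelab's step-04 folder; a minimal manifest for this step might look something like the following (the name and version numbers here are illustrative):

```json
{
  "name": "webrtc-codelab",
  "version": "0.0.1",
  "description": "WebRTC codelab signaling server",
  "dependencies": {
    "node-static": "^0.7.11",
    "socket.io": "^2.3.0"
  }
}
```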
To install dependencies (such as /socket.io/socket.io.js), run the following from the command line terminal, in your work directory:
You should see an installation log that ends something like this:
As you can see, npm has installed the dependencies defined in package.json.
Create a new file index.js at the top level of your work directory (not in the js directory) and add the following code:
From the command line terminal, run the following command in the work directory:
From your browser, open localhost:8080.
Each time you open this URL, you will be prompted to enter a room name. To join the same room, choose the same room name each time, such as ‘foo'.
Open a new tab page, and open localhost:8080 again. Choose the same room name.
Open localhost:8080 in a third tab or window. Choose the same room name again.
Check the console in each of the tabs: you should see the logging from the JavaScript above.
Bonus points
- What alternative messaging mechanisms might be possible? What problems might you encounter using ‘pure' WebSocket?
- What issues might be involved with scaling this application? Can you develop a method for testing thousands or millions of simultaneous room requests?
- This app uses a JavaScript prompt to get a room name. Work out a way to get the room name from the URL. For example, localhost:8080/foo would give the room name foo.
What you learned
In this step, you learned how to:
- Use npm to install project dependencies as specified in package.json.
- Run a Node.js server to serve static files.
- Set up a messaging service on Node.js using Socket.IO.
- Use that to create ‘rooms' and exchange messages.
A complete version of this step is in the step-04 folder.
Find out more
Next up
Find out how to use signaling to enable two users to make a peer connection.
What you'll learn
In this step you'll find out how to:
- Run a WebRTC signaling service using Socket.IO running on Node.js
- Use that service to exchange WebRTC metadata between peers.
A complete version of this step is in the step-05 folder.
Replace HTML and JavaScript
Replace the contents of index.html with the following:
Replace js/main.js with the contents of step-05/js/main.js.
Run the Node.js server
If you are not following this codelab from your work directory, you may need to install the dependencies for the step-05 folder or your current working folder. Run the following command from your working directory:
Once installed, if your Node.js server is not running, start it by calling the following command in the work directory:
Make sure you're using the version of index.js from the previous step that implements Socket.IO. For more information on Node.js and Socket.IO, review the section 'Set up a signaling service to exchange messages'.
From your browser, open localhost:8080.
Open localhost:8080 again, in a new tab or window. One video element will display the local stream from getUserMedia() and the other will show the ‘remote' video streamed via RTCPeerConnection.
You'll need to restart your Node.js server each time you close a client tab or window.
View logging in the browser console.
Bonus points
- This application supports only one-to-one video chat. How might you change the design to enable more than one person to share the same video chat room?
- The example has the room name foo hard coded. What would be the best way to enable other room names?
- How would users share the room name? Try to build an alternative to sharing room names.
- How could you change the app
What you learned
In this step you learned how to:
- Run a WebRTC signaling service using Socket.IO running on Node.js.
- Use that service to exchange WebRTC metadata between peers.
A complete version of this step is in the step-05 folder.
Tips
- WebRTC stats and debug data are available from chrome://webrtc-internals.
- test.webrtc.org can be used to check your local environment and test your camera and microphone.
- If you have odd troubles with caching, try the following:
- Do a hard refresh by holding down ctrl and clicking the Reload button
- Restart the browser
- Run npm cache clean from the command line.
Next up
Find out how to take a photo, get the image data, and share that between remote peers.
What you'll learn
In this step you'll learn how to:
- Take a photo and get the data from it using the canvas element.
- Exchange image data with a remote user.
A complete version of this step is in the step-06 folder.
How it works
Previously you learned how to exchange text messages using RTCDataChannel.
This step makes it possible to share entire files: in this example, photos captured via getUserMedia().
The core parts of this step are as follows:
- Establish a data channel. Note that you don't add any media streams to the peer connection in this step.
- Capture the user's webcam video stream with getUserMedia():
- When the user clicks the Snap button, get a snapshot (a video frame) from the video stream and display it in a canvas element:
- When the user clicks the Send button, convert the image to bytes and send them via a data channel:
- The receiving side converts data channel message bytes back to an image and displays the image to the user:
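The "convert the image to bytes and send" step usually involves splitting the byte buffer into pieces small enough for individual data channel messages. This chunking sketch is an assumption for illustration (the 64 KiB limit is a commonly cited safe message size, not a value from the codelab):

```javascript
// Split a byte buffer into chunks that fit in one data channel
// message each; the sender would call channel.send() once per chunk.
const CHUNK_SIZE = 64 * 1024;

function chunkBytes(bytes) {
  const chunks = [];
  for (let offset = 0; offset < bytes.length; offset += CHUNK_SIZE) {
    chunks.push(bytes.slice(offset, offset + CHUNK_SIZE));
  }
  return chunks;
}

// The receiving side concatenates the chunks back into one buffer
// before decoding it as an image.
function joinChunks(chunks, totalLength) {
  const out = new Uint8Array(totalLength);
  let offset = 0;
  for (const chunk of chunks) {
    out.set(chunk, offset);
    offset += chunk.length;
  }
  return out;
}
```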
Get the code
Replace the contents of your work folder with the contents of step-06. Your index.html file in work should now look like this:
If you are not following this codelab from your work directory, you may need to install the dependencies for the step-06 folder or your current working folder. Simply run the following command from your working directory:
Once installed, if your Node.js server is not running, start it by calling the following command from your work directory:
Make sure you're using the version of index.js that implements Socket.IO, and remember to restart your Node.js server if you make changes. For more information on Node.js and Socket.IO, review the section 'Set up a signaling service to exchange messages'.
If necessary, click on the Allow button to allow the app to use your webcam.
The app will create a random room ID and add that ID to the URL. Open the URL from the address bar in a new browser tab or window.
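One way an app might generate a random room ID like this and build a shareable URL is sketched below. The format (eight base-36 characters, ID carried in the URL fragment) is this sketch's assumption, not the codelab's exact scheme:

```javascript
// Generate a short random room ID. toString(36) yields a string like
// "0.k3j9x2ab..."; slicing drops the leading "0." part.
function randomRoomId() {
  return Math.random().toString(36).slice(2, 10);
}

// Build a URL a user could paste into another tab to join the room.
function roomUrl(base, roomId) {
  return base + '/#' + roomId;
}
```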
Click the Snap & Send button and then look at the Incoming area in the other tab at the bottom of the page. The app transfers photos between tabs.
You should see something like this:
Bonus points
- How can you change the code to make it possible to share any file type?
Find out more
- The MediaStream Image Capture API: an API for taking photographs and controlling cameras — coming soon to a browser near you!
- The MediaRecorder API, for recording audio and video: demo, documentation.
What you learned
- How to take a photo and get the data from it using the canvas element.
- How to exchange that data with a remote user.
A complete version of this step is in the step-06 folder.
You built an app to do realtime video streaming and data exchange!
What you learned
In this codelab you learned how to:
- Get video from your webcam.
- Stream video with RTCPeerConnection.
- Stream data with RTCDataChannel.
- Set up a signaling service to exchange messages.
- Combine peer connection and signaling.
- Take a photo and share it via a data channel.
Next steps
- Look at the code and architecture for the canonical WebRTC chat application AppRTC: app, code.
- Try out the live demos from github.com/webrtc/samples.
Learn more
- A range of resources for getting started with WebRTC are available from webrtc.org/start.