JS - Tools - CL2 Flashcards

1
Q

npm:

  • structure of package.json
  • create own npm package
  • publish own npm package
  • Running scripts
  • Scripts structure
A

package.json

You can add a package.json file to your package to make it easy for others to manage and install. Packages published to the registry must contain a package.json file.
A package.json file:
- lists the packages your project depends on
- specifies versions of a package that your project can use using semantic versioning rules
- makes your build reproducible, and therefore easier to share with other developers
Note: To make your package easier to find on the npm website, we recommend including a custom description in your package.json file.

package.json fields
Required name and version fields
A package.json file must contain "name" and "version" fields.
The "name" field contains your package’s name, and must be lowercase and one word, and may contain hyphens and underscores.
The "version" field must be in the form x.x.x and follow the semantic versioning guidelines.
Author field
If you want to include package author information in the "author" field, use the following format (email and website are both optional):
Your Name <email@example.com> (http://example.com)
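For example, a minimal package.json with the required fields plus a few common ones might look like this (all values below are placeholders):
{
  "name": "my-test-package",
  "version": "1.0.0",
  "description": "A short description of the package",
  "main": "index.js",
  "author": "Your Name <email@example.com> (http://example.com)",
  "license": "MIT"
}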

You can create a package.json file by running a CLI questionnaire or creating a default package.json file.
Running a CLI questionnaire
To create a package.json file with values that you supply, use the npm init command.
On the command line, navigate to the root directory of your package.
cd /path/to/package
Run the following command:
npm init
Answer the questions in the command line questionnaire.

Creating a default package.json file
To create a default package.json using information extracted from the current directory, use the npm init command with the --yes or -y flag. For a list of default values, see "Default values extracted from the current directory".
On the command line, navigate to the root directory of your package.
cd /path/to/package
Run the following command:
npm init --yes

Default values extracted from the current directory

  • name: the current directory name
  • version: always 1.0.0
  • description: info from the README, or an empty string “”
  • main: always index.js
  • scripts: by default creates an empty test script
  • keywords: empty
  • author: empty
  • license: ISC
  • bugs: information from the current directory, if present
  • homepage: information from the current directory, if present

Setting config options for the init command
You can set default config options for the init command. For example, to set the default author email, author name, and license, on the command line, run the following commands:
> npm set init.author.email "example-user@example.com"
> npm set init.author.name "example_user"
> npm set init.license "MIT"

Options

name
If you plan to publish your package, the most important things in your package.json are the name and version fields as they will be required. The name and version together form an identifier that is assumed to be completely unique. Changes to the package should come along with changes to the version. If you don’t plan to publish your package, the name and version fields are optional.
The name is what your thing is called.
Some rules:
- The name must be less than or equal to 214 characters. This includes the scope for scoped packages.
- The name can’t start with a dot or an underscore.
- New packages must not have uppercase letters in the name.
- The name ends up being part of a URL, an argument on the command line, and a folder name. Therefore, the name can’t contain any non-URL-safe characters.
Some tips:
- Don’t use the same name as a core Node module.
- Don’t put “js” or “node” in the name. It’s assumed that it’s js, since you’re writing a package.json file, and you can specify the engine using the “engines” field. (See below.)
- The name will probably be passed as an argument to require(), so it should be something short, but also reasonably descriptive.
- You may want to check the npm registry to see if there’s something by that name already, before you get too attached to it. https://www.npmjs.com/

version
Version must be parseable by node-semver, which is bundled with npm as a dependency. (npm install semver to use it yourself.)

description
Put a description in it. It’s a string. This helps people discover your package, as it’s listed in npm search.

keywords
Put keywords in it. It’s an array of strings. This helps people discover your package as it’s listed in npm search.

homepage
The url to the project homepage.
Example:
"homepage": "https://github.com/owner/project#readme"

bugs
The url to your project’s issue tracker and / or the email address to which issues should be reported. These are helpful for people who encounter issues with your package.
It should look like this:
{
  "url": "https://github.com/owner/project/issues",
  "email": "project@hostname.com"
}
You can specify either one or both values. If you want to provide only a url, you can specify the value for “bugs” as a simple string instead of an object.
If a url is provided, it will be used by the npm bugs command.

license
You should specify a license for your package so that people know how they are permitted to use it, and any restrictions you’re placing on it

people fields: author, contributors
The “author” is one person. “contributors” is an array of people. A “person” is an object with a “name” field and optionally “url” and “email”, like this:
{
  "name": "Barney Rubble",
  "email": "b@rubble.com",
  "url": "http://barnyrubble.tumblr.com/"
}
Or you can shorten that all into a single string, and npm will parse it for you:
"Barney Rubble <b@rubble.com> (http://barnyrubble.tumblr.com/)"
Both email and url are optional either way.
npm also sets a top-level "maintainers" field with your npm user info.

files
The optional files field is an array of file patterns that describes the entries to be included when your package is installed as a dependency. File patterns follow a similar syntax to .gitignore, but reversed: including a file, directory, or glob pattern (*, **/*, and such) will make it so that file is included in the tarball when it's packed. Omitting the field will make it default to ["*"], which means it will include all files.
Some special files and directories are also included or excluded regardless of whether they exist in the files array (see below).
You can also provide a .npmignore file in the root of your package or in subdirectories, which will keep files from being included. At the root of your package it will not override the “files” field, but in subdirectories it will. The .npmignore file works just like a .gitignore. If there is a .gitignore file, and .npmignore is missing, .gitignore’s contents will be used instead.
Files included with the “package.json#files” field cannot be excluded through .npmignore or .gitignore.
Certain files are always included, regardless of settings:
- package.json
- README
- CHANGES / CHANGELOG / HISTORY
- LICENSE / LICENCE
- NOTICE
- The file in the “main” field
README, CHANGES, LICENSE & NOTICE can have any case and extension.
Conversely, some files are always ignored:
.git, CVS, .svn, .hg, .lock-wscript, .wafpickle-N, *.swp, .DS_Store, ._*, npm-debug.log, .npmrc, node_modules, config.gypi, *.orig, package-lock.json (use shrinkwrap instead)

main
The main field is a module ID that is the primary entry point to your program. That is, if your package is named foo, and a user installs it, and then does require(“foo”), then your main module’s exports object will be returned.
This should be a module ID relative to the root of your package folder.
For most modules, it makes the most sense to have a main script and often not much else.

browser
If your module is meant to be used client-side, the browser field should be used instead of the main field. This hints to users that the module might rely on primitives that are not available in Node.js (e.g. window).
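A minimal sketch (the file paths are illustrative):
{
  "main": "./lib/server.js",
  "browser": "./lib/browser.js"
}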

bin
A lot of packages have one or more executable files that they’d like to install into the PATH. npm makes this pretty easy (in fact, it uses this feature to install the “npm” executable.)
To use this, supply a bin field in your package.json which is a map of command name to local file name. On install, npm will symlink that file into prefix/bin for global installs, or ./node_modules/.bin/ for local installs.
For example, myapp could have this:
{ "bin": { "myapp": "./cli.js" } }
So, when you install myapp, it’ll create a symlink from the cli.js script to /usr/local/bin/myapp.
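For this to work, the file referenced in bin should start with a shebang line so it is executed with node; a minimal cli.js sketch (the output line is only an illustration):
#!/usr/bin/env node
// cli.js - the entry point that ends up on the user's PATH as "myapp"
console.log('Hello from myapp!');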

repository
Specify the place where your code lives. This is helpful for people who want to contribute. If the git repo is on GitHub, then the npm docs command will be able to find you.
Do it like this:
"repository": {
  "type": "git",
  "url": "https://github.com/npm/cli.git"
}
"repository": {
  "type": "svn",
  "url": "https://v8.googlecode.com/svn/trunk/"
}
The URL should be a publicly available (perhaps read-only) url that can be handed directly to a VCS program without any modification. It should not be a url to an html project page that you put in your browser. It’s for computers.
For GitHub, GitHub gist, Bitbucket, or GitLab repositories you can use the same shortcut syntax you use for npm install:
"repository": "npm/npm"
"repository": "github:user/repo"
"repository": "gist:11081aaa281"
"repository": "bitbucket:user/repo"
"repository": "gitlab:user/repo"
If the package.json for your package is not in the root directory (for example if it is part of a monorepo), you can specify the directory in which it lives:
"repository": {
  "type": "git",
  "url": "https://github.com/facebook/react.git",
  "directory": "packages/react-dom"
}

scripts
The “scripts” property is a dictionary containing script commands that are run at various times in the lifecycle of your package. The key is the lifecycle event, and the value is the command to run at that point.
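A typical scripts block might look like the following sketch (the tool choices here, webpack, karma and eslint, are only examples):
"scripts": {
  "build": "webpack --mode production",
  "pretest": "eslint src",
  "test": "karma start",
  "start": "node server.js"
}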

config
A “config” object can be used to set configuration parameters used in package scripts that persist across upgrades. For instance, if a package had the following:
{
  "name": "foo",
  "config": { "port": "8080" }
}
and then had a “start” command that then referenced the npm_package_config_port environment variable, then the user could override that by doing npm config set foo:port 8001.
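A minimal sketch of how the two pieces fit together (server.js and its contents are assumed for illustration):
{
  "name": "foo",
  "config": { "port": "8080" },
  "scripts": { "start": "node server.js" }
}
server.js
// reads the value npm exposes as an environment variable at script time
const port = process.env.npm_package_config_port || 8080;
require('http').createServer((req, res) => res.end('ok')).listen(port);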

dependencies
Dependencies are specified in a simple object that maps a package name to a version range. The version range is a string which has one or more space-separated descriptors. Dependencies can also be identified with a tarball or git URL.
Please do not put test harnesses or transpilers in your dependencies object. See devDependencies, below.
See semver for more details about specifying version ranges; a sample dependencies block follows the list below.
- version - Must match version exactly
- >version - Must be greater than version
- >=version, <version, <=version - etc.
- ~version - Approximately equivalent to version
- ^version - Compatible with version
- 1.2.x - 1.2.0, 1.2.1, etc., but not 1.3.0
- * - Matches any version
- version1 - version2 - Same as >=version1 <=version2
- range1 || range2 - Passes if either range1 or range2 is satisfied
- git... - See 'Git URLs as Dependencies' in the npm docs
- user/repo - See 'GitHub URLs' in the npm docs
- tag - A specific version tagged and published as tag
- path/path/path - See 'Local Paths' in the npm docs
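As referenced above, a sample dependencies block combining several of these range styles (package names are placeholders):
"dependencies": {
  "foo": "1.0.0 - 2.9999.9999",
  "bar": ">=1.0.2 <2.1.2",
  "baz": ">1.0.2 <=2.3.4",
  "boo": "2.0.1",
  "qux": "<1.0.0 || >=2.3.1 <2.4.5",
  "asd": "http://asdf.com/asdf.tar.gz",
  "til": "~1.2",
  "elf": "~1.2.3",
  "two": "2.x",
  "thr": "3.3.x",
  "lat": "latest",
  "dyl": "file:../dyl"
}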

devDependencies
If someone is planning on downloading and using your module in their program, then they probably don’t want or need to download and build the external test or documentation framework that you use.
In this case, it’s best to map these additional items in a devDependencies object.
These things will be installed when doing npm link or npm install from the root of a package, and can be managed like any other npm configuration param. See npm-config for more on the topic.
peerDependencies
In some cases, you want to express the compatibility of your package with a host tool or library, while not necessarily doing a require of this host. This is usually referred to as a plugin. Notably, your module may be exposing a specific interface, expected and specified by the host documentation.

bundledDependencies
This defines an array of package names that will be bundled when publishing the package.
In cases where you need to preserve npm packages locally or have them available through a single file download, you can bundle the packages in a tarball file by specifying the package names in the bundledDependencies array and executing npm pack.

optionalDependencies
If a dependency can be used, but you would like npm to proceed if it cannot be found or fails to install, then you may put it in the optionalDependencies object. This is a map of package name to version or url, just like the dependencies object. The difference is that build failures do not cause installation to fail.
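Your code then needs to handle the dependency being absent; a sketch (foo is a placeholder package name):
try {
  var foo = require('foo');
  var fooVersion = require('foo/package.json').version;
} catch (er) {
  // the optional dependency failed to install or is missing
  foo = null;
}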

engines
You can specify the version of node that your stuff works on:
{ "engines": { "node": ">=0.10.3 <0.12" } }
And, like with dependencies, if you don’t specify the version (or if you specify “*” as the version), then any version of node will do.
If you specify an “engines” field, then npm will require that “node” be somewhere on that list. If “engines” is omitted, then npm will just assume that it works on node.
You can also use the “engines” field to specify which versions of npm are capable of properly installing your program. For example:
{ "engines": { "npm": "~1.0.20" } }
Unless the user has set the engine-strict config flag, this field is advisory only and will only produce warnings when your package is installed as a dependency.

engineStrict
This feature was removed in npm 3.0.0
Prior to npm 3.0.0, this feature was used to treat this package as if the user had set engine-strict. It is no longer used.

os
You can specify which operating systems your module will run on:
"os": [ "darwin", "linux" ]
You can also blacklist instead of whitelist operating systems, just prepend the blacklisted os with a ‘!’:
"os": [ "!win32" ]
The host operating system is determined by process.platform
It is allowed to both blacklist, and whitelist, although there isn’t any good reason to do this.

cpu
If your code only runs on certain cpu architectures, you can specify which ones.
"cpu": [ "x64", "ia32" ]
Like the os option, you can also blacklist architectures:
"cpu": [ "!arm", "!mips" ]
The host architecture is determined by process.arch

private
If you set “private”: true in your package.json, then npm will refuse to publish it.
This is a way to prevent accidental publication of private repositories. If you would like to ensure that a given package is only ever published to a specific registry (for example, an internal registry), then use the publishConfig dictionary described below to override the registry config param at publish-time.

publishConfig
This is a set of config values that will be used at publish-time. It’s especially handy if you want to set the tag, registry or access, so that you can ensure that a given package is not tagged with “latest”, published to the global public registry or that a scoped module is private by default.
Any config values can be overridden, but only “tag”, “registry” and “access” probably matter for the purposes of publishing.
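For example, to pin publishes to an internal registry, a sketch (the registry URL is a placeholder):
"publishConfig": {
  "registry": "https://registry.my-company.example.com/"
}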

DEFAULT VALUES
npm will default some values based on package contents.
- "scripts": {"start": "node server.js"}
If there is a server.js file in the root of your package, then npm will default the start command to node server.js.
- "scripts": {"install": "node-gyp rebuild"}
If there is a binding.gyp file in the root of your package and you have not defined an install or preinstall script, npm will default the install command to compile using node-gyp.
- "contributors": [...]
If there is an AUTHORS file in the root of your package, npm will treat each line as a Name <email> (url) format, where email and url are optional. Lines which start with a # or are blank will be ignored.

Creating and publishing unscoped public packages
As an npm user, you can create unscoped packages to use in your own projects and publish them to the npm public registry for others to use in theirs. Unscoped packages are always public and are referred to by the package name only:
package-name
Note: Before you can publish public unscoped npm packages, you must sign up for an npm user account.
Creating an unscoped public package
1. On the command line, create a directory for your package:
mkdir my-test-package
2. Navigate to the root directory of your package:
cd my-test-package
3. If you are using git to manage your package code, in the package root directory, run the following commands, replacing git-remote-url with the git remote URL for your package:
git init
git remote add origin git://git-remote-url
4. In the package root directory, run the npm init command.
5. Respond to the prompts to generate a package.json file. For help naming your package, see “Package name guidelines”.
6. Create a README file that explains what your package code is and how to use it.
7. In your preferred text editor, write the code for your package.

Publishing sensitive information to the registry can harm your users, compromise your development infrastructure, be expensive to fix, and put you at risk of legal action. We strongly recommend removing sensitive information, such as private keys, passwords, personally identifiable information (PII), and credit card data before publishing your package to the registry.
For less sensitive information, such as testing data, use a .npmignore or .gitignore file to prevent publishing to the registry.

Publishing unscoped public packages
1. On the command line, navigate to the root directory of your package.
cd /path/to/package
2. To publish your public package to the npm registry, run:
npm publish
3. To see your public package page, visit https://npmjs.com/package/package-name, replacing package-name with the name of your package. Public packages will say public below the package name on the npm website.

Creating and publishing scoped public packages
To share your code publicly in a user or Org namespace, you can publish public user-scoped or Org-scoped packages to the npm registry.
Note: Before you can publish user-scoped npm packages, you must sign up for an npm user account. Additionally, to publish Org-scoped packages, you must create an npm user account, then create an npm Org.

Creating a scoped public package
1. If you are using npmrc to manage accounts on multiple registries, on the command line, switch to the appropriate profile:
npmrc
2. On the command line, create a directory for your package:
mkdir my-test-package
3. Navigate to the root directory of your package:
cd my-test-package
4. If you are using git to manage your package code, in the package root directory, run the following commands, replacing git-remote-url with the git remote URL for your package:
git init
git remote add origin git://git-remote-url
5. In the package root directory, run the npm init command and pass the scope to the scope flag:
- For an Org-scoped package, replace my-org with the name of your Org:
npm init --scope=@my-org
- For a user-scoped package, replace my-username with your username:
npm init --scope=@my-username
6. Respond to the prompts to generate a package.json file. For help naming your package, see “Package name guidelines”.
7. Create a README file that explains what your package code is and how to use it.
8. In your preferred text editor, write the code for your package.

Publishing scoped public packages
By default, scoped packages are published with private visibility. To publish a scoped package with public visibility, use npm publish --access public.
1. On the command line, navigate to the root directory of your package.
cd /path/to/package
2. To publish your scoped public package to the npm registry, run:
npm publish --access public
3. To see your public package page, visit https://npmjs.com/package/package-name, replacing package-name with the name of your package. Public packages will say public below the package name on the npm website.

Publishing private packages
By default, scoped packages are published with private visibility.
1. On the command line, navigate to the root directory of your package.
cd /path/to/package
2. To publish your private package to the npm registry, run:
npm publish
3. To see your private package page, visit https://npmjs.com/package/package-name, replacing package-name with the name of your package. Private packages will say private below the package name on the npm website.

npm scripts
npm supports the “scripts” property of the package.json file, for the following scripts:
- prepublish: Run BEFORE the package is packed and published, as well as on local npm install without any arguments. (See below)
- prepare: Run both BEFORE the package is packed and published, on local npm install without any arguments, and when installing git dependencies (See below). This is run AFTER prepublish, but BEFORE prepublishOnly.
- prepublishOnly: Run BEFORE the package is prepared and packed, ONLY on npm publish. (See below.)
- prepack: run BEFORE a tarball is packed (on npm pack, npm publish, and when installing git dependencies)
- postpack: Run AFTER the tarball has been generated and moved to its final destination.
- publish, postpublish: Run AFTER the package is published.
- preinstall: Run BEFORE the package is installed
- install, postinstall: Run AFTER the package is installed.
- preuninstall, uninstall: Run BEFORE the package is uninstalled.
- postuninstall: Run AFTER the package is uninstalled.
- preversion: Run BEFORE bumping the package version.
- version: Run AFTER bumping the package version, but BEFORE commit.
- postversion: Run AFTER bumping the package version, and AFTER commit.
- pretest, test, posttest: Run by the npm test command.
- prestop, stop, poststop: Run by the npm stop command.
- prestart, start, poststart: Run by the npm start command.
- prerestart, restart, postrestart: Run by the npm restart command. Note: npm restart will run the stop and start scripts if no restart script is provided.
- preshrinkwrap, shrinkwrap, postshrinkwrap: Run by the npm shrinkwrap command.

Additionally, arbitrary scripts can be executed by running npm run-script <stage>. Pre and post commands with matching names will be run for those as well (e.g. premyscript, myscript, postmyscript). Scripts from dependencies can be run with npm explore <pkg> -- npm run <stage>.
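For example, a custom script with matching pre and post hooks, run with npm run deploy (the commands themselves are placeholders):
"scripts": {
  "predeploy": "npm test",
  "deploy": "node scripts/deploy.js",
  "postdeploy": "node scripts/notify.js"
}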

USE CASES
If you need to perform operations on your package before it is used, in a way that is not dependent on the operating system or architecture of the target system, use a prepublish script. This includes tasks such as:
- Compiling CoffeeScript source code into JavaScript.
- Creating minified versions of JavaScript source code.
- Fetching remote resources that your package will use.
The advantage of doing these things at prepublish time is that they can be done once, in a single place, thus reducing complexity and variability. Additionally, this means that:
- You can depend on coffee-script as a devDependency, and thus your users don’t need to have it installed.
- You don’t need to include minifiers in your package, reducing the size for your users.
- You don’t need to rely on your users having curl or wget or other system tools on the target machines.

EXITING
Scripts are run by passing the line as a script argument to sh.
If the script exits with a code other than 0, then this will abort the process.
Note that these script files don’t have to be nodejs or even javascript programs. They just have to be some kind of executable file.

HOOK SCRIPTS
If you want to run a specific script at a specific lifecycle event for ALL packages, then you can use a hook script.
Place an executable file at node_modules/.hooks/{eventname}, and it’ll get run for all packages when they are going through that point in the package lifecycle for any packages installed in that root.
Hook scripts are run exactly the same way as package.json scripts. That is, they are in a separate child process, with the env described above.

BEST PRACTICES

  • Don’t exit with a non-zero error code unless you really mean it. Except for uninstall scripts, this will cause the npm action to fail, and potentially be rolled back. If the failure is minor or only will prevent some optional features, then it’s better to just print a warning and exit successfully.
  • Try not to use scripts to do what npm can do for you. Read through package.json to see all the things that you can specify and enable by simply describing your package appropriately. In general, this will lead to a more robust and consistent state.
  • Inspect the env to determine where to put things. For instance, if the npm_config_binroot environment variable is set to /home/user/bin, then don’t try to install executables into /usr/local/bin. The user probably set it up that way for a reason.
  • Don’t prefix your script commands with “sudo”. If root permissions are required for some reason, then it’ll fail with that error, and the user will sudo the npm command in question.
  • Don’t use install. Use a .gyp file for compilation, and prepublish for anything else. You should almost never have to explicitly set a preinstall or install script. If you are doing this, please consider if there is another option. The only valid use of install or preinstall scripts is for compilation which must be done on the target architecture.

npm-run-script
npm run-script <command> [--silent] [-- <args>...]
alias: npm run
This runs an arbitrary command from a package’s “scripts” object. If no “command” is provided, it will list the available scripts. run[-script] is used by the test, start, restart, and stop commands, but can be called directly, as well. When the scripts in the package are printed out, they’re separated into lifecycle (test, start, restart) and directly-run scripts.
As of npm@2.0.0, you can use custom arguments when executing scripts. The special option -- is used by getopt to delimit the end of the options. npm will pass all the arguments after the -- directly to your script:
npm run test -- --grep="pattern"
The arguments will only be passed to the script specified after npm run and not to any pre or post script.
The env script is a special built-in command that can be used to list environment variables that will be available to the script at runtime. If an “env” command is defined in your package, it will take precedence over the built-in.
In addition to the shell’s pre-existing PATH, npm run adds node_modules/.bin to the PATH provided to scripts. Any binaries provided by locally-installed dependencies can be used without the node_modules/.bin prefix. For example, if there is a devDependency on tap in your package, you should write:
"scripts": {"test": "tap test/*.js"}
instead of
"scripts": {"test": "node_modules/.bin/tap test/*.js"}
to run your tests.
The actual shell your script is run within is platform dependent. By default, on Unix-like systems it is the /bin/sh command, on Windows it is the cmd.exe. The actual shell referred to by /bin/sh also depends on the system. As of npm@5.1.0 you can customize the shell with the script-shell configuration.
Scripts are run from the root of the module, regardless of what your current working directory is when you call npm run. If you want your script to use different behavior based on what subdirectory you’re in, you can use the INIT_CWD environment variable, which holds the full path you were in when you ran npm run.
npm run sets the NODE environment variable to the node executable with which npm is executed. Also, if --scripts-prepend-node-path is passed, the directory within which node resides is added to the PATH. If --scripts-prepend-node-path=auto is passed (which has been the default in npm v3), this is only performed when that node executable is not found in the PATH.
If you try to run a script without having a node_modules directory and it fails, you will be given a warning to run npm install, just in case you’ve forgotten.
You can use the --silent flag to prevent showing npm ERR! output on error.
You can use the --if-present flag to avoid exiting with a non-zero exit code when the script is undefined. This lets you run potentially undefined scripts without breaking the execution chain.

2
Q

grunt/gulp

  • structure of Gruntfile.js
  • structure of gulpfile.js
A

https://gruntjs.com/sample-gruntfile
https://markgoodyear.com/2014/01/getting-started-with-gulp/
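A minimal Gruntfile.js follows the wrapper structure below; the grunt-contrib-uglify task is only an illustration of registering a plugin task, and the paths are placeholders:
Gruntfile.js
module.exports = function (grunt) {
  // project configuration, keyed by task name
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    uglify: {
      build: {
        src: 'src/app.js',
        dest: 'dist/app.min.js'
      }
    }
  });

  // load the plugin that provides the "uglify" task, then register the default task
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['uglify']);
};

A gulpfile.js, by contrast, defines tasks as plain functions piping vinyl streams; this sketch uses gulp 4 style with illustrative paths and simply copies assets:
gulpfile.js
const { src, dest, series } = require('gulp');

// a task is just a function returning a stream (or a promise)
function copyAssets() {
  return src('src/assets/**/*').pipe(dest('dist/assets'));
}

exports.copyAssets = copyAssets;
exports.default = series(copyAssets);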

3
Q

bower/jspm

  • structure of bower.json
  • structure of .bowerrc
  • create own package
  • publish own package
A

https://bower.io/docs/creating-packages/
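As a quick reference, a bower.json and a .bowerrc might look like the following sketches (names, paths and dependencies are illustrative):
bower.json
{
  "name": "my-package",
  "description": "A short description",
  "main": "dist/my-package.js",
  "keywords": ["widget"],
  "license": "MIT",
  "ignore": ["node_modules", "bower_components", "test"],
  "dependencies": {
    "jquery": "^3.0.0"
  }
}

.bowerrc
{
  "directory": "bower_components"
}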

4
Q

karma

- collect code coverage

A

http://karma-runner.github.io/0.8/config/coverage.html
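A sketch of a karma.conf.js that collects coverage via the karma-coverage preprocessor (assumes karma-jasmine, karma-coverage and karma-chrome-launcher are installed; the file paths are illustrative):
karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: ['src/**/*.js', 'test/**/*.spec.js'],
    // instrument the source files so coverage can be measured
    preprocessors: {
      'src/**/*.js': ['coverage']
    },
    reporters: ['progress', 'coverage'],
    // write an HTML coverage report into coverage/
    coverageReporter: {
      type: 'html',
      dir: 'coverage/'
    },
    browsers: ['ChromeHeadless']
  });
};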

5
Q

Chrome Developer Tools

  • emulation of different devices
  • using timeline
  • CPU profiling
  • getting information about networking
A

Simulate Mobile Devices with Device Mode in Chrome DevTools
Use Device Mode to approximate how your page looks and performs on a mobile device.
Device Mode is the name for the loose collection of features in Chrome DevTools that help you simulate mobile devices. These features include:
- Simulating a mobile viewport
- Throttling the network
- Throttling the CPU
- Simulating geolocation
- Setting orientation

Limitations
Think of Device Mode as a first-order approximation of how your page looks and feels on a mobile device. With Device Mode you don’t actually run your code on a mobile device. You simulate the mobile user experience from your laptop or desktop.
There are some aspects of mobile devices that DevTools will never be able to simulate. For example, the architecture of mobile CPUs is very different than the architecture of laptop or desktop CPUs. When in doubt, your best bet is to actually run your page on a mobile device. Use Remote Debugging to view, change, debug, and profile a page’s code from your laptop or desktop while it actually runs on a mobile device.

Features:
1. Simulate a mobile viewport
Click Toggle Device Toolbar to open the UI that enables you to simulate a mobile viewport.
2. Responsive Viewport Mode
Drag the handles to resize the viewport to whatever dimensions you need, or enter specific values in the width and height boxes.
3. Show media queries
To show media query breakpoints above your viewport, click More options and then select Show media queries.
4. Set the device type
Use the Device Type list to simulate a mobile device or desktop device.
5. Mobile Device Viewport Mode
To simulate the dimensions of a specific mobile device, select the device from the Device list.
6. Rotate the viewport to landscape orientation
Click Rotate to rotate the viewport to landscape orientation.
7. Show device frame
When simulating the dimensions of a specific mobile device like an iPhone 6, open More options and then select Show device frame to show the physical device frame around the viewport.
8. Add a custom mobile device
9. Show rulers
Click More options and then select Show rulers to see rulers above and to the left of your viewport. The sizing unit of the rulers is pixels.
10. Zoom the viewport
Use the Zoom list to zoom in or out.
11. Throttle the network and CPU
To throttle the network and CPU, select Mid-tier mobile or Low-end mobile from the Throttle list.
Mid-tier mobile simulates fast 3G and throttles your CPU so that it is 4 times slower than normal. Low-end mobile simulates slow 3G and throttles your CPU 6 times slower than normal. Keep in mind that the throttling is relative to the normal capability of your laptop or desktop.
12. Throttle the CPU only
To throttle the CPU only and not the network, go to the Performance panel, click Capture Settings, and then select 4x slowdown or 6x slowdown from the CPU list.
13. Throttle the network only
To throttle the network only and not the CPU, go to the Network panel and select Fast 3G or Slow 3G from the Throttle list.
14. Override geolocation
To open the geolocation overriding UI, click Customize and control DevTools and then select More tools > Sensors.
15. Set orientation
To open the orientation UI, click Customize and control DevTools and then select More tools > Sensors.

Performance:

https://developers.google.com/web/tools/chrome-devtools/evaluate-performance/reference#main
https://developers.google.com/web/tools/chrome-devtools/rendering-tools

Network Analysis:
https://developers.google.com/web/tools/chrome-devtools/network/reference

6
Q

CI (teamcity/hudson/jenkins/strider)

  • why do we need to use CI
  • run builds
A

In software engineering, continuous integration (CI) is the practice of merging all developers’ working copies to a shared mainline several times a day. Grady Booch first proposed the term CI in his 1991 method, although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day.

The main aim of CI is to prevent integration problems, referred to as “integration hell” in early descriptions of XP. CI is not universally accepted as an improvement over frequent integration, so it is important to distinguish between the two as there is disagreement about the virtues of each.
In XP, CI was intended to be used in combination with automated unit tests written through the practices of test-driven development. Initially this was conceived of as running and passing all unit tests in the developer’s local environment before committing to the mainline. This helps avoid one developer’s work-in-progress breaking another developer’s copy. Where necessary, partially complete features can be disabled before committing, using feature toggles for instance.
Later elaborations of the concept introduced build servers, which automatically ran the unit tests periodically or even after every commit and reported the results to the developers. The use of build servers (not necessarily running unit tests) had already been practised by some teams outside the XP community. Nowadays, many organisations have adopted CI without adopting all of XP.
In addition to automated unit tests, organisations using CI typically use a build server to implement continuous processes of applying quality control in general — small pieces of effort, applied frequently. In addition to running the unit and integration tests, such processes run additional static and dynamic tests, measure and profile performance, extract and format documentation from the source code and facilitate manual QA processes. This continuous application of quality control aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development. This is very similar to the original idea of integrating more frequently to make integration easier, only applied to QA processes.
In the same vein, the practice of continuous delivery further extends CI by making sure the software checked in on the mainline is always in a state that can be deployed to users and makes the deployment process very rapid.

Workflow
When embarking on a change, a developer takes a copy of the current code base on which to work. As other developers submit changed code to the source code repository, this copy gradually ceases to reflect the repository code. Not only can the existing code base change, but new code can be added as well as new libraries, and other resources that create dependencies, and potential conflicts.
The longer development continues on a branch without merging back to the mainline, the greater the risk of multiple integration conflicts and failures when the developer branch is eventually merged back. When developers submit code to the repository they must first update their code to reflect the changes in the repository since they took their copy. The more changes the repository contains, the more work developers must do before submitting their own changes.
Eventually, the repository may become so different from the developers’ baselines that they enter what is sometimes referred to as “merge hell”, or “integration hell”, where the time it takes to integrate exceeds the time it took to make their original changes.
Continuous integration involves integrating early and often, so as to avoid the pitfalls of “integration hell”. The practice aims to reduce rework and thus reduce cost and time.
A complementary practice to CI is that before submitting work, each programmer must do a complete build and run (and pass) all unit tests. Integration tests are usually run automatically on a CI
server when it detects a new commit.

Common practices
Continuous integration – the practice of frequently integrating one’s new or changed code with the existing code repository – should occur frequently enough that no intervening window remains between commit and build, and such that no errors can arise without developers noticing them and correcting them immediately. Normal practice is to trigger these builds by every commit to a repository, rather than a periodically scheduled build. The practicalities of doing this in a multi-developer environment of rapid commits are such that it is usual to trigger a short time after each commit, then to start a build when either this timer expires, or after a rather longer interval since the last build. Note that since each new commit resets the timer used for the short time trigger, this is the same technique used in many button debouncing algorithms. In this way the commit events are “debounced” to prevent unnecessary builds between a series of rapid-fire commits. Many automated tools offer this scheduling automatically.
Another factor is the need for a version control system that supports atomic commits, i.e. all of a developer’s changes may be seen as a single commit operation. There is no point in trying to build from only half of the changed files.
To achieve these objectives, continuous integration relies on the following principles.
1. Maintain a code repository
This practice advocates the use of a revision control system for the project’s source code. All artifacts required to build the project should be placed in the repository. In this practice and in the revision control community, the convention is that the system should be buildable from a fresh checkout and not require additional dependencies. Extreme Programming advocate Martin Fowler also mentions that where branching is supported by tools, its use should be minimised. Instead, it is preferred for changes to be integrated rather than for multiple versions of the software to be maintained simultaneously. The mainline (or trunk) should be the place for the working version of the software.
2. Automate the build
A single command should have the capability of building the system. Many build tools, such as make, have existed for many years. Other more recent tools are frequently used in continuous integration environments. Automation of the build should include automating the integration, which often includes deployment into a production-like environment. In many cases, the build script not only compiles binaries, but also generates documentation, website pages, statistics and distribution media (such as Debian DEB, Red Hat RPM or Windows MSI files).
3. Make the build self-testing
Once the code is built, all tests should run to confirm that it behaves as the developers expect it to behave.
4. Everyone commits to the baseline every day
By committing regularly, every committer can reduce the number of conflicting changes. Checking in a week’s worth of work runs the risk of conflicting with other features and can be very difficult to resolve. Early, small conflicts in an area of the system cause team members to communicate about the change they are making. Committing all changes at least once a day (once per feature built) is generally considered part of the definition of Continuous Integration. In addition performing a nightly build is generally recommended. These are lower bounds; the typical frequency is expected to be much higher.
5. Every commit (to baseline) should be built
The system should build commits to the current working version to verify that they integrate correctly. A common practice is to use Automated Continuous Integration, although this may be done manually. Automated Continuous Integration employs a continuous integration server or daemon to monitor the revision control system for changes, then automatically run the build process.
6. Keep the build fast
The build needs to complete rapidly, so that if there is a problem with integration, it is quickly identified.
7. Test in a clone of the production environment
Having a test environment can lead to failures in tested systems when they deploy in the production environment because the production environment may differ from the test environment in a significant way. However, building a replica of a production environment is cost prohibitive. Instead, the test environment, or a separate pre-production environment (“staging”) should be built to be a scalable version of the production environment to alleviate costs while maintaining technology stack composition and nuances. Within these test environments, service virtualisation is commonly used to obtain on-demand access to dependencies (e.g., APIs, third-party applications, services, mainframes, etc.) that are beyond the team’s control, still evolving, or too complex to configure in a virtual test lab.
8. Make it easy to get the latest deliverables
Making builds readily available to stakeholders and testers can reduce the amount of rework necessary when rebuilding a feature that doesn’t meet requirements. Additionally, early testing reduces the chances that defects survive until deployment. Finding errors earlier can reduce the amount of work necessary to resolve them.
All programmers should start the day by updating the project from the repository. That way, they will all stay up to date.
9. Everyone can see the results of the latest build
It should be easy to find out whether the build breaks and, if so, who made the relevant change and what that change was.
10. Automate deployment
Most CI systems allow the running of scripts after a build finishes. In most situations, it is possible to write a script to deploy the application to a live test server that everyone can look at. A further advance in this way of thinking is continuous deployment, which calls for the software to be deployed directly into production, often with additional automation to prevent defects or regressions.

Continuous integration is intended to produce benefits such as:
- Integration bugs are detected early and are easy to track down due to small change sets. This saves both time and money over the lifespan of a project.
- Avoids last-minute chaos at release dates, when everyone tries to check in their slightly incompatible versions
- When unit tests fail or a bug emerges, if developers need to revert the codebase to a bug-free state without debugging, only a small number of changes are lost (because integration happens frequently)
- Constant availability of a “current” build for testing, demo, or release purposes
- Frequent code check-in pushes developers to create modular, less complex code
With continuous automated testing benefits can include:
- Enforces discipline of frequent automated testing
- Immediate feedback on system-wide impact of local changes
- Software metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and feature completeness) focus developers on developing functional, quality code, and help develop momentum in a team

Some downsides of continuous integration can include:

  • Constructing an automated test suite requires a considerable amount of work, including ongoing effort to cover new features and follow intentional code modifications. Testing is considered a best practice for software development in its own right, regardless of whether or not continuous integration is employed, and automation is an integral part of project methodologies like test-driven development. Continuous integration can be performed without any test suite, but the cost of quality assurance to produce a releasable product can be high if it must be done manually and frequently.
  • There is some work involved to set up a build system, and it can become complex, making it difficult to modify flexibly. However, there are a number of continuous integration software projects, both proprietary and open-source, which can be used.
  • Continuous Integration is not necessarily valuable if the scope of the project is small or contains untestable legacy code.
  • Value added depends on the quality of tests and how testable the code really is.
  • Larger teams means that new code is constantly added to the integration queue, so tracking deliveries (while preserving quality) is difficult and builds queueing up can slow down everyone.
  • With multiple commits and merges a day, partial code for a feature could easily be pushed and therefore integration tests will fail until the feature is complete.
  • Safety and mission-critical development assurance (e.g., DO-178C, ISO 26262) require rigorous documentation and in-process review that are difficult to achieve using Continuous Integration. This type of life cycle often requires additional steps be completed prior to product release when regulatory approval of the product is required.
7
Q

Webpack

  • Performance
  • Resolve
  • Externals
  • Chunks
  • Hot Module Replacement – React
  • Caching
  • Shimming
A

Performance
These options allow you to control how webpack notifies you of assets and entry points that exceed a specific file limit; a combined example follows the option list below.
APIs:
- performance
object
Configure how performance hints are shown. For example if you have an asset that is over 250kb, webpack will emit a warning notifying you of this.
- performance.hints
false | 'error' | 'warning' (default: 'warning')
Turns hints on/off. In addition, tells webpack to throw either an error or a warning when hints are found.
We recommend using hints: “error” during production builds to help prevent deploying production bundles that are too large, impacting webpage performance.
- performance.maxEntrypointSize
number = 250000
An entry point represents all assets that would be utilized during initial load time for a specific entry. This option controls when webpack should emit performance hints based on the maximum entry point size in bytes.
- performance.maxAssetSize
number = 250000
An asset is any emitted file from webpack. This option controls when webpack emits a performance hint based on individual asset size in bytes.
- performance.assetFilter
function(assetFilename) => boolean
This property allows webpack to control what files are used to calculate performance hints.
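Putting these options together, a sketch of a performance block (excluding source maps in assetFilter is just an example):
webpack.config.js
module.exports = {
  //...
  performance: {
    hints: 'warning',
    maxEntrypointSize: 250000,
    maxAssetSize: 250000,
    // ignore source map files when calculating hints
    assetFilter: function (assetFilename) {
      return !assetFilename.endsWith('.map');
    }
  }
};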

Link: https://webpack.js.org/configuration/performance/

Resolve
These options change how modules are resolved. webpack provides reasonable defaults, but it is possible to change the resolving in detail.
APIs:
- resolve
object
Configure how modules are resolved. For example, when calling import ‘lodash’ in ES2015, the resolve options can change where webpack goes to look for ‘lodash’
- resolve.alias
object
Create aliases to import or require certain modules more easily. For example, to alias a bunch of commonly used src/ folders:
webpack.config.js
const path = require('path');

module.exports = {
  //...
  resolve: {
    alias: {
      Utilities: path.resolve(__dirname, 'src/utilities/'),
      Templates: path.resolve(__dirname, 'src/templates/')
    }
  }
};
Now, instead of using relative paths when importing like so:
import Utility from '../../utilities/utility';
you can use the alias:
import Utility from 'Utilities/utility';
A trailing $ can also be added to the given object’s keys to signify an exact match
- resolve.aliasFields
[string]: ['browser']
Specify a field, such as browser, to be parsed according to this specification.
- resolve.cacheWithContext
boolean (since webpack 3.1.0)
If unsafe cache is enabled, includes request.context in the cache key. This option is taken into account by the enhanced-resolve module. Since webpack 3.1.0 context in resolve caching is ignored when resolve or resolveLoader plugins are provided. This addresses a performance regression.
- resolve.enforceExtension
boolean = false
If true, it will not allow extension-less files. So by default require('./foo') works if ./foo has a .js extension, but with this enabled only require('./foo.js') will work.
- resolve.extensions
[string] = ['.wasm', '.mjs', '.js', '.json']
Attempt to resolve these extensions in order.
If multiple files share the same name but have different extensions, webpack will resolve the one with the extension listed first in the array and skip the rest.
webpack.config.js
module.exports = {
  //...
  resolve: {
    extensions: ['.wasm', '.mjs', '.js', '.json']
  }
};
which is what enables users to leave off the extension when importing:
import File from '../path/to/file';
- resolve.mainFiles
[string] = ['index']
The filename to be used while resolving directories.
webpack.config.js
module.exports = {
  //...
  resolve: {
    mainFiles: ['index']
  }
};
- resolve.modules
[string] = ['node_modules']
Tell webpack what directories should be searched when resolving modules.
Absolute and relative paths can both be used, but be aware that they will behave a bit differently
A relative path will be scanned similarly to how Node scans for node_modules, by looking through the current directory as well as its ancestors (i.e. ./node_modules, ../node_modules, and on).
With an absolute path, it will only search in the given directory.
- resolve.plugins
[Plugin]
A list of additional resolve plugins which should be applied. It allows plugins such as DirectoryNamedWebpackPlugin.
- resolve.symlinks
boolean = true
Whether to resolve symlinks to their symlinked location.
When enabled, symlinked resources are resolved to their real path, not their symlinked location. Note that this may cause module resolution to fail when using tools that symlink packages (like npm link).
- resolve.cachePredicate
function(module) => boolean
A function which decides whether a request should be cached or not. An object is passed to the function with path and request properties. It must return a boolean.
- resolveLoader
object { modules [string] = ['node_modules'], extensions [string] = ['.js', '.json'], mainFields [string] = ['loader', 'main'] }
This set of options is identical to the resolve property set above, but is used only to resolve webpack’s loader packages.

Link: https://webpack.js.org/configuration/resolve/

Externals
The externals configuration option provides a way of excluding dependencies from the output bundles. Instead, the created bundle relies on that dependency to be present in the consumer’s environment. This feature is typically most useful to library developers, however there are a variety of applications for it.
externals
string object function regex
Prevent bundling of certain imported packages and instead retrieve these external dependencies at runtime.
For example, to include jQuery from a CDN instead of bundling it:
index.html
<script src="https://code.jquery.com/jquery-3.1.0.js"></script>
webpack.config.js
module.exports = {
  //...
  externals: {
    jquery: 'jQuery'
  }
};
This leaves any dependent modules unchanged, i.e. the code shown below will still work:
import $ from 'jquery';

$('.my-element').animate(/* ... */);

Link: https://webpack.js.org/configuration/externals/

Code Splitting
This guide extends the examples provided in Getting Started and Output Management. Please make sure you are at least familiar with the examples provided in them.
Code splitting is one of the most compelling features of webpack. This feature allows you to split your code into various bundles which can then be loaded on demand or in parallel. It can be used to achieve smaller bundles and control resource load prioritization which, if used correctly, can have a major impact on load time.
There are three general approaches to code splitting available:
- Entry Points: Manually split code using entry configuration.
- Prevent Duplication: Use the SplitChunksPlugin to dedupe and split chunks.
- Dynamic Imports: Split code via inline function calls within modules.

Entry Points
This is by far the easiest and most intuitive way to split code. However, it is more manual and has some pitfalls we will go over. Let’s take a look at how we might split another module from the main bundle:
project
webpack-demo
|- package.json
|- webpack.config.js
|- /dist
|- /src
|- index.js
+ |- another-module.js
|- /node_modules
another-module.js
import _ from 'lodash';

console.log(
  _.join(['Another', 'module', 'loaded!'], ' ')
);
webpack.config.js
const path = require('path');

module.exports = {
  mode: 'development',
  entry: {
    index: './src/index.js',
+   another: './src/another-module.js',
  },
  output: {
    filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
};
This will yield the following build result:

Asset Size Chunks Chunk Names
another.bundle.js 550 KiB another [emitted] another
index.bundle.js 550 KiB index [emitted] index
Entrypoint index = index.bundle.js
Entrypoint another = another.bundle.js

As mentioned there are some pitfalls to this approach:
- If there are any duplicated modules between entry chunks they will be included in both bundles.
- It isn’t as flexible and can’t be used to dynamically split code with the core application logic.
The first of these two points is definitely an issue for our example, as lodash is also imported within ./src/index.js and will thus be duplicated in both bundles. Let's remove this duplication by using the SplitChunksPlugin.

Prevent Duplication
The SplitChunksPlugin allows us to extract common dependencies into an existing entry chunk or an entirely new chunk. Let’s use this to de-duplicate the lodash dependency from the previous example:
webpack.config.js

const path = require('path');

module.exports = {
  mode: 'development',
  entry: {
    index: './src/index.js',
    another: './src/another-module.js',
  },
  output: {
    filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
+ optimization: {
+   splitChunks: {
+     chunks: 'all',
+   },
+ },
};
With the optimization.splitChunks configuration option in place, we should now see the duplicate dependency removed from our index.bundle.js and another.bundle.js. The plugin should notice that we’ve separated lodash out to a separate chunk and remove the dead weight from our main bundle. Let’s do an npm run build to see if it worked:

Asset Size Chunks Chunk Names
another.bundle.js 5.95 KiB another [emitted] another
index.bundle.js 5.89 KiB index [emitted] index
vendors~another~index.bundle.js 547 KiB vendors~another~index [emitted] vendors~another~index
Entrypoint index = vendors~another~index.bundle.js index.bundle.js
Entrypoint another = vendors~another~index.bundle.js another.bundle.js

Dynamic Imports
Two similar techniques are supported by webpack when it comes to dynamic code splitting. The first and recommended approach is to use the import() syntax that conforms to the ECMAScript proposal for dynamic imports. The legacy, webpack-specific approach is to use require.ensure. Let's try using the first of these two approaches...
import() calls use promises internally. If you use import() with older browsers, remember to shim Promise using a polyfill such as es6-promise or promise-polyfill.
As import() returns a promise, it can be used with async functions. However, this requires using a pre-processor like Babel and the Syntax Dynamic Import Babel Plugin. 
It is possible to provide a dynamic expression to import() when you might need to import specific module based on a computed variable later.
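For example, using the async form mentioned above, a module can be loaded on demand like this (lodash and the chunk name are assumptions carried over from the earlier examples):
src/index.js
// dynamically import lodash into its own chunk the first time the component is built
async function getComponent() {
  const { default: _ } = await import(/* webpackChunkName: "lodash" */ 'lodash');
  const element = document.createElement('div');
  element.innerHTML = _.join(['Hello', 'webpack'], ' ');
  return element;
}

getComponent().then((component) => document.body.appendChild(component));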

Prefetching/Preloading modules
webpack 4.6.0+ adds support for prefetching and preloading.
Using these inline directives while declaring your imports allows webpack to output “Resource Hint” which tells the browser that for:
prefetch: resource is probably needed for some navigation in the future
preload: resource might be needed during the current navigation
Preload directive has a bunch of differences compared to prefetch:
- A preloaded chunk starts loading in parallel to the parent chunk. A prefetched chunk starts after the parent chunk finishes loading.
- A preloaded chunk has medium priority and is instantly downloaded. A prefetched chunk is downloaded while the browser is idle.
- A preloaded chunk should be instantly requested by the parent chunk. A prefetched chunk can be used anytime in the future.
- Browser support is different.
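A minimal sketch of both directives (LoginModal and ChartingLibrary are hypothetical modules used only for illustration):
// prefetch: hint that this chunk will probably be needed for a future navigation
import(/* webpackPrefetch: true */ './path/to/LoginModal.js');
// preload: load this chunk in parallel with the parent chunk for the current navigation
import(/* webpackPreload: true */ './path/to/ChartingLibrary.js').then((charting) => {
  // use the module here
});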

Link: https://webpack.js.org/guides/code-splitting/

Hot Module Replacement
Hot Module Replacement (or HMR) is one of the most useful features offered by webpack. It allows all kinds of modules to be updated at runtime without the need for a full refresh.
Hot Module Replacement (HMR) exchanges, adds, or removes modules while an application is running, without a full reload. This can significantly speed up development in a few ways:
- Retain application state which is lost during a full reload.
- Save valuable development time by only updating what’s changed.
- Instantly update the browser when modifications are made to CSS/JS in the source code, which is almost comparable to changing styles directly in the browser’s dev tools.

Enabling HMR
This feature is great for productivity. All we need to do is update our webpack-dev-server configuration, and use webpack's built-in HMR plugin.
  devServer: {
    contentBase: './dist',
+   hot: true,
  },
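On the module side, a minimal sketch of opting into HMR (assuming a hypothetical ./print.js module exporting a printMe function):
src/index.js
import printMe from './print.js';
if (module.hot) {
  // accept updates for ./print.js and re-run the handler whenever it changes
  module.hot.accept('./print.js', () => {
    console.log('Accepting the updated printMe module!');
    printMe();
  });
}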

Links:

https://webpack.js.org/guides/hot-module-replacement/
https://webpack.js.org/concepts/hot-module-replacement/

Caching
So we’re using webpack to bundle our modular application which yields a deployable /dist directory. Once the contents of /dist have been deployed to a server, clients (typically browsers) will hit that server to grab the site and its assets. The last step can be time consuming, which is why browsers use a technique called caching. This allows sites to load faster with less unnecessary network traffic. However, it can also cause headaches when you need new code to be picked up.
This guide focuses on the configuration needed to ensure files produced by webpack compilation can remain cached unless their content has changed.

Output Filenames
We can use the output.filename substitutions setting to define the names of our output files. webpack provides a method of templating the filenames using bracketed strings called substitutions. The [contenthash] substitution will add a unique hash based on the content of an asset. When the asset’s content changes, [contenthash] will change as well.
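A minimal sketch of such a configuration (the entry path is an assumption):
webpack.config.js
const path = require('path');
module.exports = {
  entry: './src/index.js',
  output: {
    // [contenthash] changes only when the asset's content changes,
    // so unchanged files keep their cached URLs
    filename: '[name].[contenthash].js',
    path: path.resolve(__dirname, 'dist'),
  },
};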

Extracting Boilerplate
As we learned in code splitting, the SplitChunksPlugin can be used to split modules out into separate bundles. webpack also provides an optimization feature to split the runtime code into a separate chunk via the optimization.runtimeChunk option. Set it to 'single' to create a single runtime bundle for all chunks.
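A minimal sketch of that option (only the optimization section is shown; the rest of the config is assumed to match the caching example above):
webpack.config.js
module.exports = {
  // ...
  optimization: {
    // extract the webpack runtime/manifest into a single runtime chunk
    runtimeChunk: 'single',
  },
};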

Link: https://webpack.js.org/guides/caching/

Shimming
The webpack compiler can understand modules written as ES2015 modules, CommonJS or AMD. However, some third party libraries may expect global dependencies (e.g. $ for jQuery). The libraries might also create globals which need to be exported. These “broken modules” are one instance where shimming comes into play.
Another instance where shimming can be useful is when you want to polyfill browser functionality to support more users. In this case, you may only want to deliver those polyfills to the browsers that need patching (i.e. load them on demand).
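For the global-dependency case, webpack's ProvidePlugin can shim a module as a free variable. A minimal sketch, assuming we want lodash available as _ in every module without importing it:
webpack.config.js
const path = require('path');
const webpack = require('webpack');
module.exports = {
  entry: './src/index.js',
  output: {
    filename: 'main.js',
    path: path.resolve(__dirname, 'dist'),
  },
  plugins: [
    // whenever a module references the free variable `_`,
    // webpack resolves it to require('lodash')
    new webpack.ProvidePlugin({
      _: 'lodash',
    }),
  ],
};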

Loading Polyfills
There’s a lot of ways to load polyfills. For example, to include the babel-polyfill we might simply:
npm install --save babel-polyfill
and import it so as to include it in our main bundle:
src/index.js
+ import 'babel-polyfill';
Note that this approach prioritizes correctness over bundle size. To be safe and robust, polyfills/shims must run before all other code, and thus either need to load synchronously, or, all app code needs to load after all polyfills/shims load. There are many misconceptions in the community, as well, that modern browsers “don’t need” polyfills, or that polyfills/shims merely serve to add missing features - in fact, they often repair broken implementations, even in the most modern of browsers. The best practice thus remains to unconditionally and synchronously load all polyfills/shims, despite the bundle size cost this incurs.

Link: https://webpack.js.org/guides/shimming/

8
Q

Typescript

  • Advanced Types
  • Project Configuration
  • Declaration Files
A

Advanced types
1. Intersection Types
An intersection type combines multiple types into one. This allows you to add together existing types to get a single type that has all the features you need. For example, Person & Serializable & Loggable is a Person and Serializable and Loggable. That means an object of this type will have all members of all three types.
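A minimal sketch (the interface bodies are assumptions for illustration):
interface Person { name: string; }
interface Serializable { serialize(): string; }
interface Loggable { log(): void; }
// an object of this type must have all members of all three types
type LoggablePerson = Person & Serializable & Loggable;
const p: LoggablePerson = {
  name: 'Ada',
  serialize() { return JSON.stringify({ name: 'Ada' }); },
  log() { console.log('Ada'); },
};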
2. Union Types
A union type describes a value that can be one of several types. We use the vertical bar (|) to separate each type, so number | string | boolean is the type of a value that can be a number, a string, or a boolean.
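A minimal sketch of a union in a function signature (the padLeft implementation is illustrative):
// padding may be a number of spaces or a literal prefix string
function padLeft(value: string, padding: string | number): string {
  if (typeof padding === 'number') {
    return ' '.repeat(padding) + value;
  }
  return padding + value;
}
padLeft('Hello', 4);     // "    Hello"
padLeft('Hello', '>> '); // ">> Hello"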
3. Discriminated Unions
You can combine singleton types, union types, type guards, and type aliases to build an advanced pattern called discriminated unions, also known as tagged unions or algebraic data types. Discriminated unions are useful in functional programming. Some languages automatically discriminate unions for you; TypeScript instead builds on JavaScript patterns as they exist today. There are three ingredients:
- Types that have a common, singleton type property — the discriminant.
- A type alias that takes the union of those types — the union.
- Type guards on the common property.
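A minimal sketch combining those three ingredients (a Shape union with two members, in the style of the handbook):
// 1. a common singleton-typed property - the discriminant
interface Square { kind: 'square'; size: number; }
interface Circle { kind: 'circle'; radius: number; }
// 2. a type alias taking the union of those types
type Shape = Square | Circle;
// 3. type guards on the common property narrow the union
function area(s: Shape): number {
  switch (s.kind) {
    case 'square': return s.size * s.size;
    case 'circle': return Math.PI * s.radius ** 2;
  }
}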
4. Index types
With index types, you can get the compiler to check code that uses dynamic property names. For example, a common JavaScript pattern is to pick a subset of properties from an object:
let carProps: keyof Car; // the union of ('manufacturer' | 'model' | 'year')
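A minimal sketch of that pattern, assuming the Car interface and a generic pluck helper (both illustrative):
interface Car {
  manufacturer: string;
  model: string;
  year: number;
}
// K is constrained to the property names of T; T[K] is the indexed access type
function pluck<T, K extends keyof T>(obj: T, keys: K[]): T[K][] {
  return keys.map((k) => obj[k]);
}
const taxi: Car = { manufacturer: 'Toyota', model: 'Camry', year: 2014 };
const makeAndModel: string[] = pluck(taxi, ['manufacturer', 'model']);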
5. Mapped types
A common task is to take an existing type and make each of its properties optional:
interface PersonPartial {
  name?: string;
  age?: number;
}
Or we might want a readonly version:
interface PersonReadonly {
  readonly name: string;
  readonly age: number;
}
This happens often enough in JavaScript that TypeScript provides a way to create new types based on old types — mapped types. In a mapped type, the new type transforms each property in the old type in the same way. For example, you can make all properties of a type readonly or optional. Here are a couple of examples:
type Readonly<T> = {
  readonly [P in keyof T]: T[P];
};
type Partial<T> = {
  [P in keyof T]?: T[P];
};
And to use it:
type PersonPartial = Partial<Person>;
type ReadonlyPerson = Readonly<Person>;
6. Conditional Types
TypeScript 2.8 introduces conditional types which add the ability to express non-uniform type mappings. A conditional type selects one of two possible types based on a condition expressed as a type relationship test:

T extends U ? X : Y
The type above means when T is assignable to U the type is X, otherwise the type is Y.

A conditional type T extends U ? X : Y is either resolved to X or Y, or deferred because the condition depends on one or more type variables. When T or U contains type variables, whether to resolve to X or Y, or to defer, is determined by whether or not the type system has enough information to conclude that T is always assignable to U.
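A minimal sketch of a conditional type chain (a TypeName helper in the style of the handbook):
type TypeName<T> =
  T extends string ? 'string' :
  T extends number ? 'number' :
  T extends boolean ? 'boolean' :
  T extends undefined ? 'undefined' :
  T extends Function ? 'function' :
  'object';
type T0 = TypeName<string>;     // 'string'
type T1 = TypeName<'a'>;        // 'string'
type T2 = TypeName<() => void>; // 'function'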

Link: https://www.typescriptlang.org/docs/handbook/advanced-types.html

Project Configuration
tsconfig.json
The presence of a tsconfig.json file in a directory indicates that the directory is the root of a TypeScript project. The tsconfig.json file specifies the root files and the compiler options required to compile the project. A project is compiled in one of the following ways:

Using tsconfig.json
- By invoking tsc with no input files, in which case the compiler searches for the tsconfig.json file starting in the current directory and continuing up the parent directory chain.
- By invoking tsc with no input files and a --project (or just -p) command line option that specifies the path of a directory containing a tsconfig.json file, or a path to a valid .json file containing the configurations.
When input files are specified on the command line, tsconfig.json files are ignored.

Examples
Example tsconfig.json files:
Using the "files" property
{
    "compilerOptions": {
        "module": "commonjs",
        "noImplicitAny": true,
        "removeComments": true,
        "preserveConstEnums": true,
        "sourceMap": true
    },
    "files": [
        "core.ts",
        "sys.ts",
        "types.ts",
        "scanner.ts",
        "parser.ts",
        "utilities.ts",
        "binder.ts",
        "checker.ts",
        "emitter.ts",
        "program.ts",
        "commandLineParser.ts",
        "tsc.ts",
        "diagnosticInformationMap.generated.ts"
    ]
}
Using the "include" and "exclude" properties
{
    "compilerOptions": {
        "module": "system",
        "noImplicitAny": true,
        "removeComments": true,
        "preserveConstEnums": true,
        "outFile": "../../built/local/tsc.js",
        "sourceMap": true
    },
    "include": [
        "src/**/*"
    ],
    "exclude": [
        "node_modules",
        "**/*.spec.ts"
    ]
}

Link: https://www.typescriptlang.org/docs/handbook/tsconfig-json.html

Declaration Files
Link: https://www.typescriptlang.org/docs/handbook/declaration-files/library-structures.html

9
Q

Eslint

  • using cli
  • configuring eslint and .eslintrc. structure
  • formatters
A

Using CLI
Link: https://eslint.org/docs/user-guide/command-line-interface

Configuring eslint and .eslintrc. structure
Link: https://eslint.org/docs/user-guide/configuring

Formatters
Link: https://eslint.org/docs/user-guide/formatters/

10
Q

Yarn

  • Installation(macOS/Windows/Linux)
  • Configuration (yarn.lock and package.json)
  • Managing dependencies(add/update/remove)
  • Offline mirror
A

Installation
Link: https://yarnpkg.com/en/docs/install#debian-stable

Configuration
Link: https://yarnpkg.com/en/docs/configuration

Managing dependencies
Link:
https://yarnpkg.com/en/docs/cli/add
https://yarnpkg.com/en/docs/cli/upgrade
https://yarnpkg.com/en/docs/cli/remove