This commit is contained in:
2025-05-12 05:38:44 +09:00
parent dced21c3f8
commit 6d78bfa46e
8120 changed files with 1161564 additions and 0 deletions

48
book/node_modules/mensch/HISTORY.md generated vendored Normal file

@@ -0,0 +1,48 @@
# mensch history
## 0.3.4 - 09 Nov 2019
- Fix global leak and stringify indentation
## 0.3.3 - 04 Aug 2016
- Added Bower.json
- Preserve property spacing when value starts with "special/unexpected" chars '(@*/{):'.
## 0.3.2 - 18 Aug 2015
- Fixed column/line computation for comments and at-rules (Fix #15)
- Closing } and ; now take precedence over declaration value parsing (Fix #14)
- Ignore curly braces in strings (Fix #13)
- Keep \n and \t inside values and consider them "whitespace" (Fix #12)
- Fixed column count in positions for rows after the first line (Fix #18)
- Enabled running test suite under Windows (CRLF vs LF issues)
## 0.3.1 - 1 Dec 2013
- Retain whitespace in selectors. Closes #8
- Add support for `@-ms-keyframes`.
## 0.3.0 - 23 Nov 2013
- Improve handling of quotes in values. Closes #5
- Add support for `@document` (and `@-moz-document`), `@namespace` and `@page`.
## 0.2.1 - 20 Sep 2013
- Trim whitespace around grouped selectors.
## 0.2.0 - 18 Sep 2013
- Correctly handle comments as children of at-groups. Fix #2
## 0.1.0 - 17 Jun 2013
- Added new boolean `position` option to `parse()`, which will include position
data in the AST when enabled.
- Moved node.selector to node.selectors, and changed the value to an array.
- Various parser improvements and bug fixes.
## 0.0.1 - 11 Jun 2013
- Initial release.

21
book/node_modules/mensch/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
Copyright (c) Brett Stimmerman <brettstimmerman@gmail.com>
This software is released under the MIT license:
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the 'Software'), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

130
book/node_modules/mensch/README.md generated vendored Normal file

@@ -0,0 +1,130 @@
# mensch [![Build Status](https://secure.travis-ci.org/brettstimmerman/mensch.png?branch=master)](http://travis-ci.org/brettstimmerman/mensch)
A decent CSS parser.
# usage
```sh
npm install mensch
```
```js
var mensch = require('mensch');
var ast = mensch.parse('p { color: black; }');
var css = mensch.stringify(ast);
console.log(css);
// => p { color: black; }
```
# api
## parse(css, [options={}])
Convert a CSS string or an array of lexical tokens into a `stringify`-able AST.
- `css` {String|Array} CSS string or array of lexical tokens
- `[options]` {Object}
- `[options.comments=false]` {Boolean} Allow comment nodes in the AST.
- `[options.position=false]` {Boolean} Allow line/column position in the AST.
When `{position: true}`, AST nodes will have a `position` property:
```js
{
  type: 'comment',
  text: ' Hello World! ',
  position: {
    start: { line: 1, col: 1 },
    end: { line: 1, col: 18 }
  }
}
```
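For example, a minimal sketch of requesting position (and comment) data; the values shown follow the node shape above:
```js
var mensch = require('mensch');

var ast = mensch.parse('/* Hello World! */', { comments: true, position: true });
var comment = ast.stylesheet.rules[0];

console.log(comment.type);           // => 'comment'
console.log(comment.position.start); // => { line: 1, col: 1 }
```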
## stringify(ast, [options={}])
Convert a `stringify`-able AST into a CSS string.
- `ast` {Object} A `stringify`-able AST
- `[options]` {Object}
- `[options.comments=false]` {Boolean} Allow comments in the stringified CSS.
- `[options.indentation='']` {String} E.g., `indentation: '  '` will indent by two spaces, as in the sketch below.
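A small sketch of the `indentation` option in action (output assumes the default, uncompressed whitespace handling):
```js
var mensch = require('mensch');

var ast = mensch.parse('p { color: black; }');
console.log(mensch.stringify(ast, { indentation: '  ' }));
// =>
// p {
//   color: black;
// }
```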
## lex(css)
Convert a CSS string to an array of lexical tokens for use with `.parse()`.
- `css` {String} CSS
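Since `parse()` also accepts a token array, the lexer can be run as a separate step; a minimal sketch:
```js
var mensch = require('mensch');

var tokens = mensch.lex('p { color: black; }');
console.log(tokens[0].type); // => 'selector'

// The tokens can then be handed straight to parse().
var ast = mensch.parse(tokens);
```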
# non-validating
Mensch is a non-validating CSS parser. While it can handle the major language
constructs just fine, and it can recover from gaffes like mis-matched braces and
missing or extraneous semi-colons, mensch can't tell you when it finds
invalid CSS like a misspelled property name or a misplaced `@import`.
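For instance, an extraneous semi-colon is simply dropped rather than reported (a sketch of the recovery behaviour described above):
```js
var mensch = require('mensch');

var ast = mensch.parse('p { color: black;; }');
var rule = ast.stylesheet.rules[0];

console.log(rule.declarations.length); // => 1, the stray semi-colon is ignored
```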
# comments
Unlike most CSS parsers, mensch allows comments to be represented in the AST and
subsequently stringified with the `{comments: true}` option.
```js
var options = { comments: true };
```
```js
var ast = mensch.parse('.red { color: red; /* Natch. */ }', options);
var css = mensch.stringify(ast, options);
console.log(css);
//=> .red { color: red; /* Natch. */ }
```
However, comments within the context of a selector, property, etc., will be
ignored. These comments are difficult to represent in the AST.
```js
var ast = mensch.parse('.red /*1*/ { color /*2*/: /*3*/ red /*4*/; }', options);
var css = mensch.stringify(ast, options);
console.log(css);
//=> .red { color: red; }
```
# ast
The structure of mensch's AST riffs on several existing CSS parsers, but it
might not be 100% compatible with them. Here it is in a nutshell:
```js
{
  type: 'stylesheet',
  stylesheet: {
    rules: [{
      type: 'rule',
      selectors: ['.foo'],
      declarations: [{
        type: 'property',
        name: 'color',
        value: 'black'
      }]
    }]
  }
}
```
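Because the AST is plain objects and arrays, it is easy to walk or tweak before stringifying. A small illustrative sketch (the colour swap is hypothetical, not part of the API):
```js
var mensch = require('mensch');

var ast = mensch.parse('.foo { color: black; }');

ast.stylesheet.rules.forEach(function (rule) {
  if (rule.type !== 'rule') { return; }
  rule.declarations.forEach(function (decl) {
    if (decl.type === 'property' && decl.name === 'color') {
      decl.value = 'rebeccapurple'; // hypothetical tweak
    }
  });
});

console.log(mensch.stringify(ast));
```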
# credits
Mensch is based on several existing CSS parsers, but
[nzakas/parser-lib](https://github.com/nzakas/parser-lib) and
[visionmedia/css](https://github.com/visionmedia/css) are notable influences.
# known users
[voidlabs/mosaico](https://github.com/voidlabs/mosaico) uses the Mensch parser to parse custom-flavored CSS rules in email templates and make the templates editable: position data, comment parsing, and multiple declarations for the same property were key to the choice of Mensch!
[Automattic/juice](https://github.com/Automattic/juice) moved to the Mensch CSS parser in its 3.0 release to fix dozens of issues with the previous parser, especially with support for multiple declarations of the same property in one ruleset and with invalid values.
Please let us know if you use Mensch in your library!

27
book/node_modules/mensch/bower.json generated vendored Normal file

@@ -0,0 +1,27 @@
{
"name": "mensch",
"main": "index.js",
"version": "0.3.4",
"homepage": "https://github.com/brettstimmerman/mensch",
"authors": [
"Brett Stimmerman <brettstimmerman@gmail.com>"
],
"description": "A decent CSS parser",
"moduleType": [
"node"
],
"keywords": [
"css",
"parser",
"parsing",
"stylesheet"
],
"license": "MIT",
"ignore": [
"**/.*",
"node_modules",
"bower_components",
"test",
"tests"
]
}

5
book/node_modules/mensch/index.js generated vendored Normal file

@@ -0,0 +1,5 @@
module.exports = {
lex : require('./lib/lexer'),
parse: require('./lib/parser'),
stringify: require('./lib/stringify')
};

11
book/node_modules/mensch/lib/debug.js generated vendored Normal file

@@ -0,0 +1,11 @@
exports = module.exports = debug;
function debug(label) {
return _debug.bind(null, label);
}
function _debug(label) {
var args = [].slice.call(arguments, 1);
args.unshift('[' + label + ']');
process.stderr.write(args.join(' ') + '\n');
}

692
book/node_modules/mensch/lib/lexer.js generated vendored Normal file

@@ -0,0 +1,692 @@
var DEBUG = false; // `true` to print debugging info.
var TIMER = false; // `true` to time calls to `lex()` and print the results.
var debug = require('./debug')('lex');
exports = module.exports = lex;
/**
* Convert a CSS string into an array of lexical tokens.
*
* @param {String} css CSS
* @returns {Array} lexical tokens
*/
function lex(css) {
var start; // Debug timer start.
var buffer = ''; // Character accumulator
var ch; // Current character
var column = 0; // Current source column number
var cursor = -1; // Current source cursor position
var depth = 0; // Current nesting depth
var line = 1; // Current source line number
var state = 'before-selector'; // Current state
var stack = [state]; // State stack
var token = {}; // Current token
var tokens = []; // Token accumulator
// Supported @-rules, in roughly descending order of usage probability.
var atRules = [
'media',
'keyframes',
{ name: '-webkit-keyframes', type: 'keyframes', prefix: '-webkit-' },
{ name: '-moz-keyframes', type: 'keyframes', prefix: '-moz-' },
{ name: '-ms-keyframes', type: 'keyframes', prefix: '-ms-' },
{ name: '-o-keyframes', type: 'keyframes', prefix: '-o-' },
'font-face',
{ name: 'import', state: 'before-at-value' },
{ name: 'charset', state: 'before-at-value' },
'supports',
'viewport',
{ name: 'namespace', state: 'before-at-value' },
'document',
{ name: '-moz-document', type: 'document', prefix: '-moz-' },
'page'
];
// -- Functions ------------------------------------------------------------
/**
* Advance the character cursor and return the next character.
*
* @returns {String} The next character.
*/
function getCh() {
skip();
return css[cursor];
}
/**
* Return the state at the given index in the stack.
* The stack is LIFO so indexing is from the right.
*
* @param {Number} [index=0] Index to return.
* @returns {String} state
*/
function getState(index) {
return index ? stack[stack.length - 1 - index] : state;
}
/**
* Look ahead for a string beginning from the next position. The string
* being looked for must start at the next position.
*
* @param {String} str The string to look for.
* @returns {Boolean} Whether the string was found.
*/
function isNextString(str) {
var start = cursor + 1;
return (str === css.slice(start, start + str.length));
}
/**
* Find the start position of a substring beginning from the next
* position. The string being looked for may begin anywhere.
*
* @param {String} str The substring to look for.
* @returns {Number|false} The position, or `false` if not found.
*/
function find(str) {
var pos = css.slice(cursor).indexOf(str);
return pos > 0 ? pos : false;
}
/**
* Determine whether a character is next.
*
* @param {String} ch Character.
* @returns {Boolean} Whether the character is next.
*/
function isNextChar(ch) {
return ch === peek(1);
}
/**
* Return the character at the given cursor offset. The offset is relative
* to the cursor, so negative values move backwards.
*
* @param {Number} [offset=1] Cursor offset.
* @returns {String} Character.
*/
function peek(offset) {
return css[cursor + (offset || 1)];
}
/**
* Remove the current state from the stack and set the new current state.
*
* @returns {String} The removed state.
*/
function popState() {
var removed = stack.pop();
state = stack[stack.length - 1];
return removed;
}
/**
* Set the current state and add it to the stack.
*
* @param {String} newState The new state.
* @returns {Number} The new stack length.
*/
function pushState(newState) {
state = newState;
stack.push(state);
return stack.length;
}
/**
* Replace the current state with a new state.
*
* @param {String} newState The new state.
* @returns {String} The replaced state.
*/
function replaceState(newState) {
var previousState = state;
stack[stack.length - 1] = state = newState;
return previousState;
}
/**
* Move the character cursor. Positive numbers move the cursor forward.
* Negative numbers are not supported!
*
* @param {Number} [n=1] Number of characters to skip.
*/
function skip(n) {
if ((n || 1) == 1) {
if (css[cursor] == '\n') {
line++;
column = 1;
} else {
column++;
}
cursor++;
} else {
var skipStr = css.slice(cursor, cursor + n).split('\n');
if (skipStr.length > 1) {
line += skipStr.length - 1;
column = 1;
}
column += skipStr[skipStr.length - 1].length;
cursor = cursor + n;
}
}
/**
* Add the current token to the pile and reset the buffer.
*/
function addToken() {
token.end = {
line: line,
col: column
};
DEBUG && debug('addToken:', JSON.stringify(token, null, 2));
tokens.push(token);
buffer = '';
token = {};
}
/**
* Set the current token.
*
* @param {String} type Token type.
*/
function initializeToken(type) {
token = {
type: type,
start: {
line: line,
col : column
}
};
}
// -- Main Loop ------------------------------------------------------------
/*
The main loop is a state machine that reads in one character at a time,
and determines what to do based on the current state and character.
This is implemented as a series of nested `switch` statements and the
case orders have been mildly optimized based on rough probabilities
calculated by processing a small sample of real-world CSS.
Further optimization (such as a dispatch table) shouldn't be necessary
since the total number of cases is very low.
*/
TIMER && (start = Date.now());
while (ch = getCh()) {
DEBUG && debug(ch, getState());
// column += 1;
switch (ch) {
// Space
case ' ':
switch (getState()) {
case 'selector':
case 'value':
case 'value-paren':
case 'at-group':
case 'at-value':
case 'comment':
case 'double-string':
case 'single-string':
buffer += ch;
break;
}
break;
// Newline or tab
case '\n':
case '\t':
case '\r':
case '\f':
switch (getState()) {
case 'value':
case 'value-paren':
case 'at-group':
case 'comment':
case 'single-string':
case 'double-string':
case 'selector':
buffer += ch;
break;
case 'at-value':
// Tokenize an @-rule if a semi-colon was omitted.
if ('\n' === ch) {
token.value = buffer.trim();
addToken();
popState();
}
break;
}
// if ('\n' === ch) {
// column = 0;
// line += 1;
// }
break;
case ':':
switch (getState()) {
case 'name':
token.name = buffer.trim();
buffer = '';
replaceState('before-value');
break;
case 'before-selector':
buffer += ch;
initializeToken('selector');
pushState('selector');
break;
case 'before-value':
replaceState('value');
buffer += ch;
break;
default:
buffer += ch;
break;
}
break;
case ';':
switch (getState()) {
case 'name':
case 'before-value':
case 'value':
// Tokenize a declaration
// if value is empty skip the declaration
if (buffer.trim().length > 0) {
token.value = buffer.trim();
addToken();
}
replaceState('before-name');
break;
case 'value-paren':
// Insignificant semi-colon
buffer += ch;
break;
case 'at-value':
// Tokenize an @-rule
token.value = buffer.trim();
addToken();
popState();
break;
case 'before-name':
// Extraneous semi-colon
break;
default:
buffer += ch;
break;
}
break;
case '{':
switch (getState()) {
case 'selector':
// If the sequence is `\{` then assume that the brace should be escaped.
if (peek(-1) === '\\') {
buffer += ch;
break;
}
// Tokenize a selector
token.text = buffer.trim();
addToken();
replaceState('before-name');
depth = depth + 1;
break;
case 'at-group':
// Tokenize an @-group
token.name = buffer.trim();
// XXX: @-rules are starting to get hairy
switch (token.type) {
case 'font-face':
case 'viewport' :
case 'page' :
pushState('before-name');
break;
default:
pushState('before-selector');
}
addToken();
depth = depth + 1;
break;
case 'name':
case 'at-rule':
// Tokenize a declaration or an @-rule
token.name = buffer.trim();
addToken();
pushState('before-name');
depth = depth + 1;
break;
case 'comment':
case 'double-string':
case 'single-string':
// Ignore braces in comments and strings
buffer += ch;
break;
case 'before-value':
replaceState('value');
buffer += ch;
break;
}
break;
case '}':
switch (getState()) {
case 'before-name':
case 'name':
case 'before-value':
case 'value':
// If the buffer contains anything, it is a value
if (buffer) {
token.value = buffer.trim();
}
// If the current token has a name and a value it should be tokenized.
if (token.name && token.value) {
addToken();
}
// Leave the block
initializeToken('end');
addToken();
popState();
// We might need to leave again.
// XXX: What about 3 levels deep?
if ('at-group' === getState()) {
initializeToken('at-group-end');
addToken();
popState();
}
if (depth > 0) {
depth = depth - 1;
}
break;
case 'at-group':
case 'before-selector':
case 'selector':
// If the sequence is `\}` then assume that the brace should be escaped.
if (peek(-1) === '\\') {
buffer += ch;
break;
}
if (depth > 0) {
// Leave block if in an at-group
if ('at-group' === getState(1)) {
initializeToken('at-group-end');
addToken();
}
}
if (depth > 1) {
popState();
}
if (depth > 0) {
depth = depth - 1;
}
break;
case 'double-string':
case 'single-string':
case 'comment':
// Ignore braces in comments and strings.
buffer += ch;
break;
}
break;
// Strings
case '"':
case "'":
switch (getState()) {
case 'double-string':
if ('"' === ch && '\\' !== peek(-1)) {
popState();
}
break;
case 'single-string':
if ("'" === ch && '\\' !== peek(-1)) {
popState();
}
break;
case 'before-at-value':
replaceState('at-value');
pushState('"' === ch ? 'double-string' : 'single-string');
break;
case 'before-value':
replaceState('value');
pushState('"' === ch ? 'double-string' : 'single-string');
break;
case 'comment':
// Ignore strings within comments.
break;
default:
if ('\\' !== peek(-1)) {
pushState('"' === ch ? 'double-string' : 'single-string');
}
}
buffer += ch;
break;
// Comments
case '/':
switch (getState()) {
case 'comment':
case 'double-string':
case 'single-string':
// Ignore
buffer += ch;
break;
case 'before-value':
case 'selector':
case 'name':
case 'value':
if (isNextChar('*')) {
// Ignore comments in selectors, properties and values. They are
// difficult to represent in the AST.
var pos = find('*/');
if (pos) {
skip(pos + 1);
}
} else {
if (getState() == 'before-value') replaceState('value');
buffer += ch;
}
break;
default:
if (isNextChar('*')) {
// Create a comment token
initializeToken('comment');
pushState('comment');
skip();
}
else {
buffer += ch;
}
break;
}
break;
// Comment end or universal selector
case '*':
switch (getState()) {
case 'comment':
if (isNextChar('/')) {
// Tokenize a comment
token.text = buffer; // Don't trim()!
skip();
addToken();
popState();
}
else {
buffer += ch;
}
break;
case 'before-selector':
buffer += ch;
initializeToken('selector');
pushState('selector');
break;
case 'before-value':
replaceState('value');
buffer += ch;
break;
default:
buffer += ch;
}
break;
// @-rules
case '@':
switch (getState()) {
case 'comment':
case 'double-string':
case 'single-string':
buffer += ch;
break;
case 'before-value':
replaceState('value');
buffer += ch;
break;
default:
// Iterate over the supported @-rules and attempt to tokenize one.
var tokenized = false;
var name;
var rule;
for (var j = 0, len = atRules.length; !tokenized && j < len; ++j) {
rule = atRules[j];
name = rule.name || rule;
if (!isNextString(name)) { continue; }
tokenized = true;
initializeToken(name);
pushState(rule.state || 'at-group');
skip(name.length);
if (rule.prefix) {
token.prefix = rule.prefix;
}
if (rule.type) {
token.type = rule.type;
}
}
if (!tokenized) {
// Keep on truckin' America!
buffer += ch;
}
break;
}
break;
// Parentheses are tracked to disambiguate semi-colons, such as within a
// data URI.
case '(':
switch (getState()) {
case 'value':
pushState('value-paren');
break;
case 'before-value':
replaceState('value');
break;
}
buffer += ch;
break;
case ')':
switch (getState()) {
case 'value-paren':
popState();
break;
case 'before-value':
replaceState('value');
break;
}
buffer += ch;
break;
default:
switch (getState()) {
case 'before-selector':
initializeToken('selector');
pushState('selector');
break;
case 'before-name':
initializeToken('property');
replaceState('name');
break;
case 'before-value':
replaceState('value');
break;
case 'before-at-value':
replaceState('at-value');
break;
}
buffer += ch;
break;
}
}
TIMER && debug('ran in', (Date.now() - start) + 'ms');
return tokens;
}

291
book/node_modules/mensch/lib/parser.js generated vendored Normal file

@@ -0,0 +1,291 @@
var DEBUG = false; // `true` to print debugging info.
var TIMER = false; // `true` to time calls to `parse()` and print the results.
var debug = require('./debug')('parse');
var lex = require('./lexer');
exports = module.exports = parse;
var _comments; // Whether comments are allowed.
var _depth; // Current block nesting depth.
var _position; // Whether to include line/column position.
var _tokens; // Array of lexical tokens.
/**
* Convert a CSS string or array of lexical tokens into a `stringify`-able AST.
*
* @param {String|Array} css CSS string or array of lexical tokens
* @param {Object} [options]
* @param {Boolean} [options.comments=false] allow comment nodes in the AST
* @returns {Object} `stringify`-able AST
*/
function parse(css, options) {
var start; // Debug timer start.
options || (options = {});
_comments = !!options.comments;
_position = !!options.position;
_depth = 0;
// Operate on a copy of the given tokens, or the lex()'d CSS string.
_tokens = Array.isArray(css) ? css.slice() : lex(css);
var rule;
var rules = [];
var token;
TIMER && (start = Date.now());
while ((token = next())) {
rule = parseToken(token);
rule && rules.push(rule);
}
TIMER && debug('ran in', (Date.now() - start) + 'ms');
return {
type: "stylesheet",
stylesheet: {
rules: rules
}
};
}
// -- Functions --------------------------------------------------------------
/**
* Build an AST node from a lexical token.
*
* @param {Object} token lexical token
* @param {Object} [override] object hash of properties that override those
* already in the token, or that will be added to the token.
* @returns {Object} AST node
*/
function astNode(token, override) {
override || (override = {});
var key;
var keys = ['type', 'name', 'value'];
var node = {};
// Avoiding [].forEach for performance reasons.
for (var i = 0; i < keys.length; ++i) {
key = keys[i];
if (token[key]) {
node[key] = override[key] || token[key];
}
}
keys = Object.keys(override);
for (i = 0; i < keys.length; ++i) {
key = keys[i];
if (!node[key]) {
node[key] = override[key];
}
}
if (_position) {
node.position = {
start: token.start,
end: token.end
};
}
DEBUG && debug('astNode:', JSON.stringify(node, null, 2));
return node;
}
/**
* Remove a lexical token from the stack and return the removed token.
*
* @returns {Object} lexical token
*/
function next() {
var token = _tokens.shift();
DEBUG && debug('next:', JSON.stringify(token, null, 2));
return token;
}
// -- Parse* Functions ---------------------------------------------------------
/**
* Convert an @-group lexical token to an AST node.
*
* @param {Object} token @-group lexical token
* @returns {Object} @-group AST node
*/
function parseAtGroup(token) {
_depth = _depth + 1;
// As the @-group token is assembled, relevant token values are captured here
// temporarily. They will later be used as `tokenize()` overrides.
var overrides = {};
switch (token.type) {
case 'font-face':
case 'viewport' :
overrides.declarations = parseDeclarations();
break;
case 'page':
overrides.prefix = token.prefix;
overrides.declarations = parseDeclarations();
break;
default:
overrides.prefix = token.prefix;
overrides.rules = parseRules();
}
return astNode(token, overrides);
}
/**
* Convert an @import lexical token to an AST node.
*
* @param {Object} token @import lexical token
* @returns {Object} @import AST node
*/
function parseAtImport(token) {
return astNode(token);
}
/**
* Convert an @charset token to an AST node.
*
* @param {Object} token @charset lexical token
* @returns {Object} @charset node
*/
function parseCharset(token) {
return astNode(token);
}
/**
* Convert a comment token to an AST Node.
*
* @param {Object} token comment lexical token
* @returns {Object} comment node
*/
function parseComment(token) {
return astNode(token, {text: token.text});
}
function parseNamespace(token) {
return astNode(token);
}
/**
* Convert a property lexical token to a property AST node.
*
* @param {Object} token property lexical token
* @returns {Object} property node
*/
function parseProperty(token) {
return astNode(token);
}
/**
* Convert a selector lexical token to a selector AST node.
*
* @param {Object} token selector lexical token
* @returns {Object} selector node
*/
function parseSelector(token) {
function trim(str) {
return str.trim();
}
return astNode(token, {
type: 'rule',
selectors: token.text.split(',').map(trim),
declarations: parseDeclarations(token)
});
}
/**
* Convert a lexical token to an AST node.
*
* @returns {Object|undefined} AST node
*/
function parseToken(token) {
switch (token.type) {
// Cases are listed in roughly descending order of probability.
case 'property': return parseProperty(token);
case 'selector': return parseSelector(token);
case 'at-group-end': _depth = _depth - 1; return;
case 'media' :
case 'keyframes' :return parseAtGroup(token);
case 'comment': if (_comments) { return parseComment(token); } break;
case 'charset': return parseCharset(token);
case 'import': return parseAtImport(token);
case 'namespace': return parseNamespace(token);
case 'font-face':
case 'supports' :
case 'viewport' :
case 'document' :
case 'page' : return parseAtGroup(token);
}
DEBUG && debug('parseToken: unexpected token:', JSON.stringify(token));
}
// -- Parse Helper Functions ---------------------------------------------------
/**
* Iteratively parses lexical tokens from the stack into AST nodes until a
* conditional function returns `false`, at which point iteration terminates
* and any AST nodes collected are returned.
*
* @param {Function} conditionFn Called with each lexical token; returns `true`
*   if the token should be parsed, `false` otherwise.
* @returns {Array} AST nodes
*/
function parseTokensWhile(conditionFn) {
var node;
var nodes = [];
var token;
while ((token = next()) && (conditionFn && conditionFn(token))) {
node = parseToken(token);
node && nodes.push(node);
}
// Place an unused non-`end` lexical token back onto the stack.
if (token && token.type !== 'end') {
_tokens.unshift(token);
}
return nodes;
}
/**
* Convert a series of tokens into a sequence of declaration AST nodes.
*
* @returns {Array} declaration nodes
*/
function parseDeclarations() {
return parseTokensWhile(function (token) {
return (token.type === 'property' || token.type === 'comment');
});
}
/**
* Convert a series of tokens into a sequence of rule nodes.
*
* @returns {Array} rule nodes
*/
function parseRules() {
return parseTokensWhile(function () { return _depth; });
}

258
book/node_modules/mensch/lib/stringify.js generated vendored Normal file

@@ -0,0 +1,258 @@
var DEBUG = false; // `true` to print debugging info.
var TIMER = false; // `true` to time calls to `stringify()` and print the results.
var debug = require('./debug')('stringify');
var _comments; // Whether comments are allowed in the stringified CSS.
var _compress; // Whether the stringified CSS should be compressed.
var _indentation; // Indentation option value.
var _level; // Current indentation level.
var _n; // Compression-aware newline character.
var _s; // Compression-aware space character.
exports = module.exports = stringify;
/**
* Convert a `stringify`-able AST into a CSS string.
*
* @param {Object} ast `stringify`-able AST
* @param {Object} [options]
* @param {Boolean} [options.comments=false] allow comments in the CSS
* @param {Boolean} [options.compress=false] compress whitespace
* @param {String} [options.indentation=''] indentation sequence
* @returns {String} CSS
*/
function stringify(ast, options) {
var start; // Debug timer start.
options || (options = {});
_indentation = options.indentation || '';
_compress = !!options.compress;
_comments = !!options.comments;
_level = 1;
if (_compress) {
_n = _s = '';
} else {
_n = '\n';
_s = ' ';
}
TIMER && (start = Date.now());
var css = reduce(ast.stylesheet.rules, stringifyNode).join('\n').trim();
TIMER && debug('ran in', (Date.now() - start) + 'ms');
return css;
}
// -- Functions --------------------------------------------------------------
/**
* Modify the indentation level, or return a compression-aware sequence of
* spaces equal to the current indentation level.
*
* @param {Number} [level=undefined] indentation level modifier
* @returns {String} sequence of spaces
*/
function indent(level) {
if (level) {
_level += level;
return;
}
if (_compress) { return ''; }
return Array(_level).join(_indentation || '');
}
// -- Stringify Functions ------------------------------------------------------
/**
* Stringify an @-rule AST node.
*
* Use `stringifyAtGroup()` when dealing with @-groups that may contain blocks
* such as @media.
*
* @param {Object} node @-rule AST node (e.g., import, charset)
* @returns {String} Stringified @-rule
*/
function stringifyAtRule(node) {
return '@' + node.type + ' ' + node.value + ';' + _n;
}
/**
* Stringify an @-group AST node.
*
* Use `stringifyAtRule()` when dealing with @-rules that may not contain blocks
* such as @import.
*
* @param {Object} node @-group AST node
* @returns {String}
*/
function stringifyAtGroup(node) {
var label = '';
var prefix = node.prefix || '';
if (node.name) {
label = ' ' + node.name;
}
// FIXME: @-rule conditional logic is leaking everywhere.
var chomp = node.type !== 'page';
return '@' + prefix + node.type + label + _s + stringifyBlock(node, chomp) + _n;
}
/**
* Stringify a comment AST node.
*
* @param {Object} node comment AST node
* @returns {String}
*/
function stringifyComment(node) {
if (!_comments) { return ''; }
return '/*' + (node.text || '') + '*/' + _n;
}
/**
* Stringify a rule AST node.
*
* @param {Object} node rule AST node
* @returns {String}
*/
function stringifyRule(node) {
var label;
if (node.selectors) {
label = node.selectors.join(',' + _n);
} else {
label = '@' + node.type;
label += node.name ? ' ' + node.name : '';
}
return indent() + label + _s + stringifyBlock(node) + _n;
}
// -- Stringify Helper Functions -----------------------------------------------
/**
* Reduce an array by applying a function to each item and retaining the truthy
* results.
*
* When `item.type` is `'comment'` `stringifyComment` will be applied instead.
*
* @param {Array} items array to reduce
* @param {Function} fn function to call for each item in the array
* @returns {Array} retained truthy results; falsy values are omitted
*/
function reduce(items, fn) {
return items.reduce(function (results, item) {
var result = (item.type === 'comment') ? stringifyComment(item) : fn(item);
result && results.push(result);
return results;
}, []);
}
/**
* Stringify an AST node with the assumption that it represents a block of
* declarations or other @-group contents.
*
* @param {Object} node AST node
* @returns {String}
*/
// FIXME: chomp should not be a magic boolean parameter
function stringifyBlock(node, chomp) {
var children = node.declarations;
var fn = stringifyDeclaration;
if (node.rules) {
children = node.rules;
fn = stringifyRule;
}
children = stringifyChildren(children, fn);
children && (children = _n + children + (chomp ? '' : _n));
return '{' + children + indent() + '}';
}
/**
* Stringify an array of child AST nodes by calling the given stringify function
* once for each child, and concatenating the results.
*
* @param {Array} children `node.rules` or `node.declarations`
* @param {Function} fn stringify function
* @returns {String}
*/
function stringifyChildren(children, fn) {
if (!children) { return ''; }
indent(1);
var results = reduce(children, fn);
indent(-1);
if (!results.length) { return ''; }
return results.join(_n);
}
/**
* Stringify a declaration AST node.
*
* @param {Object} node declaration AST node
* @returns {String}
*/
function stringifyDeclaration(node) {
if (node.type === 'property') {
return stringifyProperty(node);
}
DEBUG && debug('stringifyDeclaration: unexpected node:', JSON.stringify(node));
}
/**
* Stringify an AST node.
*
* @param {Object} node AST node
* @returns {String}
*/
function stringifyNode(node) {
switch (node.type) {
// Cases are listed in roughly descending order of probability.
case 'rule': return stringifyRule(node);
case 'media' :
case 'keyframes': return stringifyAtGroup(node);
case 'comment': return stringifyComment(node);
case 'import' :
case 'charset' :
case 'namespace': return stringifyAtRule(node);
case 'font-face':
case 'supports' :
case 'viewport' :
case 'document' :
case 'page' : return stringifyAtGroup(node);
}
DEBUG && debug('stringifyNode: unexpected node: ' + JSON.stringify(node));
}
/**
* Stringify an AST property node.
*
* @param {Object} node AST property node
* @returns {String}
*/
function stringifyProperty(node) {
var name = node.name ? node.name + ':' + _s : '';
return indent() + name + node.value + ';';
}

26
book/node_modules/mensch/package.json generated vendored Normal file

@@ -0,0 +1,26 @@
{
"name": "mensch",
"description": "A decent CSS parser",
"version": "0.3.4",
"main": "index.js",
"scripts": {
"test": "make test"
},
"repository": {
"type": "git",
"url": "git://github.com/brettstimmerman/mensch.git"
},
"homepage": "https://github.com/brettstimmerman/mensch",
"keywords": [
"css",
"parser",
"parsing",
"stylesheet"
],
"devDependencies": {
"jshint": "*",
"mocha": "*"
},
"author": "Brett Stimmerman <brettstimmerman@gmail.com>",
"license": "MIT"
}