test

This commit is contained in:
mol
2024-07-06 22:23:31 +08:00
parent 08173d8497
commit 263cb5ef03
1663 changed files with 526884 additions and 0 deletions


@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) Microsoft Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,17 @@
NOTICES AND INFORMATION
Do Not Translate or Localize
This software incorporates material from third parties. Microsoft makes certain
open source code available at https://3rdpartysource.microsoft.com, or you may
send a check or money order for US $5.00, including the product name, the open
source component name, and version number, to:
Source Code Compliance Team
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052
USA
Notwithstanding any other terms, you may reverse engineer this software to the
extent required to debug changes to any libraries licensed under the GNU Lesser
General Public License.


@@ -0,0 +1,3 @@
# Data Collection
The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the repository. There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft's privacy statement. Our privacy statement is located at https://go.microsoft.com/fwlink/?LinkID=824704. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.
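As a rough, illustrative sketch only (the repository's own documentation is authoritative): the PostChannel source later in this commit reads a disableTelemetry flag from its extension configuration, so an opt-out could look something like the following, where everything other than the "PostChannel" identifier and the disableTelemetry flag is assumed for illustration:

// Hedged sketch - "PostChannel" and "disableTelemetry" appear in the source below;
// the surrounding configuration shape is assumed, not taken from this diff.
var coreConfig = {
    instrumentationKey: "YOUR-TENANT-TOKEN", // placeholder
    extensionConfig: {
        PostChannel: {
            disableTelemetry: true // stop the channel from sending collected events
        }
    }
};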


@@ -0,0 +1,41 @@
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.7 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -0,0 +1,46 @@
{
"name": "ms.post",
"version": "3.2.13",
"ext": {
"@gbl.js": {
"file": "ms.post-3.2.13.gbl.js",
"type": "text/javascript; charset=utf-8",
"integrity": "sha256-3nBRm7HRSUI9+pBWas220pRM4zIde/2HZx9UuLc2thc= sha384-gb450pL8z95KgphWiXozxpNgxW6L635+Ugmf515HtNyBwsQQRhuDSdxBQCsOYBBK sha512-ju1j53HRv1+jHc/mNfnVD/hW2a2ojF8IUnmy5xJxeIhAAozVbxwSECHCbsql0JzjBGnRvi3ObU/Mdg/RqblTZQ==",
"hashes": {
"sha256": "3nBRm7HRSUI9+pBWas220pRM4zIde/2HZx9UuLc2thc=",
"sha384": "gb450pL8z95KgphWiXozxpNgxW6L635+Ugmf515HtNyBwsQQRhuDSdxBQCsOYBBK",
"sha512": "ju1j53HRv1+jHc/mNfnVD/hW2a2ojF8IUnmy5xJxeIhAAozVbxwSECHCbsql0JzjBGnRvi3ObU/Mdg/RqblTZQ=="
}
},
"@gbl.min.js": {
"file": "ms.post-3.2.13.gbl.min.js",
"type": "text/javascript; charset=utf-8",
"integrity": "sha256-IfOnOjX1tZZTjg4fxZXM8RY1HsZxh9ZDlfvbD3uYv/M= sha384-+ybkWbjTGX27SgwZi2lTMjfT2wjEOYwwCt6jlWLR/Jn8U0137G4QrDVLVOYw9W2m sha512-rbi7Hj9pCY2qKerfdcon+GAj/TQdVFliS5ewgiC1nmuwfF0TxUZlvGDzOAQchG+n6uivrNfr70a57H5P41EZag==",
"hashes": {
"sha256": "IfOnOjX1tZZTjg4fxZXM8RY1HsZxh9ZDlfvbD3uYv/M=",
"sha384": "+ybkWbjTGX27SgwZi2lTMjfT2wjEOYwwCt6jlWLR/Jn8U0137G4QrDVLVOYw9W2m",
"sha512": "rbi7Hj9pCY2qKerfdcon+GAj/TQdVFliS5ewgiC1nmuwfF0TxUZlvGDzOAQchG+n6uivrNfr70a57H5P41EZag=="
}
},
"@js": {
"file": "ms.post-3.2.13.js",
"type": "text/javascript; charset=utf-8",
"integrity": "sha256-7RwUOASWq7N/vjJcDcXljElPWaOAw3lxEAWY1Et0Sog= sha384-UOZfVU2kkakKoBvfm9/0tuxOMZ4IvZuV4fbTQA/UcRlyvNXELwBeXtOS23291cZH sha512-0RmyCBRtbZu9rLaztbLM5O2BlUhulRRoy5Ghkh8pFnxM9obeLCblZ2l3ycDUoenzZItIhnC9cBJ+S89XocPQNg==",
"hashes": {
"sha256": "7RwUOASWq7N/vjJcDcXljElPWaOAw3lxEAWY1Et0Sog=",
"sha384": "UOZfVU2kkakKoBvfm9/0tuxOMZ4IvZuV4fbTQA/UcRlyvNXELwBeXtOS23291cZH",
"sha512": "0RmyCBRtbZu9rLaztbLM5O2BlUhulRRoy5Ghkh8pFnxM9obeLCblZ2l3ycDUoenzZItIhnC9cBJ+S89XocPQNg=="
}
},
"@min.js": {
"file": "ms.post-3.2.13.min.js",
"type": "text/javascript; charset=utf-8",
"integrity": "sha256-aydMP/5++lC41S8PgoXAj7oXW1vqxgx5Yzq5hqCGRzc= sha384-uOoc91rz4C5nh+RB6LejF5X1EviQatCEYRjg056vKZP8m1RgJU0Zib1DXEsaL7GY sha512-aQRmFdvsYHZdGMDA022aZ3+keOPk0UGqjUhd1GMSYUYKR8NTigIVH3eDffc2BAg+PB9TSwMOGyd/Htz4rFeYnw==",
"hashes": {
"sha256": "aydMP/5++lC41S8PgoXAj7oXW1vqxgx5Yzq5hqCGRzc=",
"sha384": "uOoc91rz4C5nh+RB6LejF5X1EviQatCEYRjg056vKZP8m1RgJU0Zib1DXEsaL7GY",
"sha512": "aQRmFdvsYHZdGMDA022aZ3+keOPk0UGqjUhd1GMSYUYKR8NTigIVH3eDffc2BAg+PB9TSwMOGyd/Htz4rFeYnw=="
}
}
}
}
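The "integrity" strings in this manifest are in Subresource Integrity (SRI) format (space-separated sha256-/sha384-/sha512- values). A minimal sketch of how one of them could be applied when loading the bundle dynamically; the CDN URL is hypothetical and the hash is copied from the "@gbl.min.js" entry above:

// Hedged sketch - apply the manifest's sha256 value as an SRI check on a dynamically added script.
var scriptEl = document.createElement("script");
scriptEl.src = "https://cdn.example.invalid/scripts/b/ms.post-3.2.13.gbl.min.js"; // hypothetical CDN path
scriptEl.integrity = "sha256-IfOnOjX1tZZTjg4fxZXM8RY1HsZxh9ZDlfvbD3uYv/M="; // from the "@gbl.min.js" entry
scriptEl.crossOrigin = "anonymous"; // SRI requires a CORS-enabled fetch
document.head.appendChild(scriptEl);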

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -0,0 +1,46 @@
{
"name": "ms.post",
"version": "3.2.13",
"ext": {
"@gbl.js": {
"file": "ms.post.gbl.js",
"type": "text/javascript; charset=utf-8",
"integrity": "sha256-4jTrsDjL4+ha4Nt1DTfu2XOsDPrBiX1lSlvMpR800es= sha384-KYAE2hGHDeMNT/gm/6TvBt8vY63SlV+I2WHpE44WThPrvc5UtnljVP+V0LRXV5ET sha512-Sz6A405Jsio3A7pwIs6wORWZpdo/a/QUK08JGGLCArfAmxLbt8ljIhio0iFwqD90Q2hYS53OBApwGg8fOuigUw==",
"hashes": {
"sha256": "4jTrsDjL4+ha4Nt1DTfu2XOsDPrBiX1lSlvMpR800es=",
"sha384": "KYAE2hGHDeMNT/gm/6TvBt8vY63SlV+I2WHpE44WThPrvc5UtnljVP+V0LRXV5ET",
"sha512": "Sz6A405Jsio3A7pwIs6wORWZpdo/a/QUK08JGGLCArfAmxLbt8ljIhio0iFwqD90Q2hYS53OBApwGg8fOuigUw=="
}
},
"@gbl.min.js": {
"file": "ms.post.gbl.min.js",
"type": "text/javascript; charset=utf-8",
"integrity": "sha256-Hz92xBgGENEbzFUmpZRL0fMKddOpW9hyQsFq1DJ/o+s= sha384-hIKluwtRRXnMUFEVokozF50z5UKk+eA8Ox3XoE4MhrhrWLdvTnIu0TKA+wiLz71e sha512-jav2tj02PM+FjVH85S4O54xeVVMN0Y4nk5SuGlORHwXJqpB8rP9IC5DocTvfIaXlOCzfYclBDo3sgOXSBcr1MA==",
"hashes": {
"sha256": "Hz92xBgGENEbzFUmpZRL0fMKddOpW9hyQsFq1DJ/o+s=",
"sha384": "hIKluwtRRXnMUFEVokozF50z5UKk+eA8Ox3XoE4MhrhrWLdvTnIu0TKA+wiLz71e",
"sha512": "jav2tj02PM+FjVH85S4O54xeVVMN0Y4nk5SuGlORHwXJqpB8rP9IC5DocTvfIaXlOCzfYclBDo3sgOXSBcr1MA=="
}
},
"@js": {
"file": "ms.post.js",
"type": "text/javascript; charset=utf-8",
"integrity": "sha256-f3Dsm5rWivCD1z7xHmxNIWUnQVfWGbTFHteXtBfJmVM= sha384-+MFgC8T6kqOdOdsGt8Dlv+WldBxA9OwbASTT/iDqALANt5AxxuHvwEMWmMjZdMqd sha512-yEQ/A+n3z2VHZ6HwSRu6gritTgI+xc/qp+ED8UJ3fSo7RwsbtTTYrfPv8+GEkdjs+EUT46BvkZ2Q/D67fuzM9A==",
"hashes": {
"sha256": "f3Dsm5rWivCD1z7xHmxNIWUnQVfWGbTFHteXtBfJmVM=",
"sha384": "+MFgC8T6kqOdOdsGt8Dlv+WldBxA9OwbASTT/iDqALANt5AxxuHvwEMWmMjZdMqd",
"sha512": "yEQ/A+n3z2VHZ6HwSRu6gritTgI+xc/qp+ED8UJ3fSo7RwsbtTTYrfPv8+GEkdjs+EUT46BvkZ2Q/D67fuzM9A=="
}
},
"@min.js": {
"file": "ms.post.min.js",
"type": "text/javascript; charset=utf-8",
"integrity": "sha256-42XmGxam79aS2As2lKPlARKDohtplUvUd/atcp5io2o= sha384-vfNbqCEHCEi09zMJpiw6/rnXAZv7k8dbDxIBUFjF59q7Sold3SO/jAYbgijq+XSz sha512-RCm54k8NvMjPopH+MNum1jauzh8kNatCy56+1ILv23P29HBjTjrBQDbMvUJBuFnFVpyMnE2yarwWFH2O4u0UtA==",
"hashes": {
"sha256": "42XmGxam79aS2As2lKPlARKDohtplUvUd/atcp5io2o=",
"sha384": "vfNbqCEHCEi09zMJpiw6/rnXAZv7k8dbDxIBUFjF59q7Sold3SO/jAYbgijq+XSz",
"sha512": "RCm54k8NvMjPopH+MNum1jauzh8kNatCy56+1ILv23P29HBjTjrBQDbMvUJBuFnFVpyMnE2yarwWFH2O4u0UtA=="
}
}
}
}

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -0,0 +1,6 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
export {};


@@ -0,0 +1,91 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
/**
* ClockSkewManager.ts
* @author Abhilash Panwar (abpanwar)
* @copyright Microsoft 2018
*/
import dynamicProto from "@microsoft/dynamicproto-js";
/**
* Class to manage clock skew correction.
*/
var ClockSkewManager = /** @class */ (function () {
function ClockSkewManager() {
var _allowRequestSending = true;
var _shouldAddClockSkewHeaders = true;
var _isFirstRequest = true;
var _clockSkewHeaderValue = "use-collector-delta";
var _clockSkewSet = false;
dynamicProto(ClockSkewManager, this, function (_self) {
/**
* Determine if requests can be sent.
* @returns True if requests can be sent, false otherwise.
*/
_self.allowRequestSending = function () {
return _allowRequestSending;
};
/**
* Tells the ClockSkewManager that it should assume that the first request has now been sent.
* If this method had not yet been called AND the clock skew had not been set, this will set
* allowRequestSending to false until setClockSkew() is called.
*/
_self.firstRequestSent = function () {
if (_isFirstRequest) {
_isFirstRequest = false;
if (!_clockSkewSet) {
// Block sending until we get the first clock Skew
_allowRequestSending = false;
}
}
};
/**
* Determine if clock skew headers should be added to the request.
* @returns True if clock skew headers should be added, false otherwise.
*/
_self.shouldAddClockSkewHeaders = function () {
return _shouldAddClockSkewHeaders;
};
/**
* Gets the clock skew header value.
* @returns The clock skew header value.
*/
_self.getClockSkewHeaderValue = function () {
return _clockSkewHeaderValue;
};
/**
* Sets the clock skew header value. Once clock skew is set this method
* is a no-op.
* @param timeDeltaInMillis - Time delta to be saved as the clock skew header value.
*/
_self.setClockSkew = function (timeDeltaInMillis) {
if (!_clockSkewSet) {
if (timeDeltaInMillis) {
_clockSkewHeaderValue = timeDeltaInMillis;
_shouldAddClockSkewHeaders = true;
_clockSkewSet = true;
}
else {
_shouldAddClockSkewHeaders = false;
}
// Unblock sending
_allowRequestSending = true;
}
};
});
}
// Removed Stub for ClockSkewManager.prototype.allowRequestSending.
// Removed Stub for ClockSkewManager.prototype.firstRequestSent.
// Removed Stub for ClockSkewManager.prototype.shouldAddClockSkewHeaders.
// Removed Stub for ClockSkewManager.prototype.getClockSkewHeaderValue.
// Removed Stub for ClockSkewManager.prototype.setClockSkew.
// This is a workaround for an IE8 bug when using dynamicProto() with classes that don't have any
// non-dynamic functions or static properties/functions when using uglify-js to minify the resulting code.
// this will be removed when ES3 support is dropped.
ClockSkewManager.__ieDyn=1;
return ClockSkewManager;
}());
export default ClockSkewManager;
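A minimal usage sketch of the class above (illustrative only; in the SDK the HttpManager is what drives these calls, and the delta value here is made up):

import ClockSkewManager from "./ClockSkewManager";

var clockSkew = new ClockSkewManager();

// Before the first request: sending is allowed and the header asks the collector for its delta.
clockSkew.allowRequestSending();        // true
clockSkew.getClockSkewHeaderValue();    // "use-collector-delta"

// Mark the first request as sent; until the skew is set, further sends are blocked.
clockSkew.firstRequestSent();
clockSkew.allowRequestSending();        // false (clock skew not yet set)

// When the collector responds with a time-delta value, record it once.
// This unblocks sending; later calls to setClockSkew() are no-ops.
clockSkew.setClockSkew("1234");
clockSkew.allowRequestSending();        // true
clockSkew.getClockSkewHeaderValue();    // "1234"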


@@ -0,0 +1,20 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
/**
* Real Time profile (default profile). RealTime Latency events are sent every 1 sec and
* Normal Latency events are sent every 2 sec.
*/
export var RT_PROFILE = "REAL_TIME";
/**
* Near Real Time profile. RealTime Latency events are sent every 3 sec and
* Normal Latency events are sent every 6 sec.
*/
export var NRT_PROFILE = "NEAR_REAL_TIME";
/**
* Best Effort. RealTime Latency events are sent every 9 sec and
* Normal Latency events are sent every 18 sec.
*/
export var BE_PROFILE = "BEST_EFFORT";
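For reference, PostChannel.js (later in this commit) maps these profile names to [Normal, RealTime, Immediate] timer values in seconds via _initializeProfiles(), and exposes an internal _setTransmitProfile to switch between them. A small sketch; the postChannel instance is hypothetical:

import { RT_PROFILE, NRT_PROFILE, BE_PROFILE } from "./DataModels";

// Default timer values (seconds) as initialized by PostChannel._initializeProfiles():
//   RT_PROFILE  -> [2, 1, 0]   Normal every 2s, RealTime every 1s, Immediate sent right away
//   NRT_PROFILE -> [6, 3, 0]   Normal every 6s, RealTime every 3s
//   BE_PROFILE  -> [18, 9, 0]  Normal every 18s, RealTime every 9s

// Switching the active profile on an initialized channel (internal API):
// postChannel._setTransmitProfile(NRT_PROFILE);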


@@ -0,0 +1,91 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
/**
* EventBatch.ts
* @author Nev Wylie (newylie)
* @copyright Microsoft 2020
*/
import { isNullOrUndefined, isValueAssigned } from "@microsoft/1ds-core-js";
import { STR_EMPTY, STR_MSFPC } from "./InternalConstants";
function _getEventMsfpc(theEvent) {
var intWeb = ((theEvent.ext || {})["intweb"]);
if (intWeb && isValueAssigned(intWeb[STR_MSFPC])) {
return intWeb[STR_MSFPC];
}
return null;
}
function _getMsfpc(theEvents) {
var msfpc = null;
for (var lp = 0; msfpc === null && lp < theEvents.length; lp++) {
msfpc = _getEventMsfpc(theEvents[lp]);
}
return msfpc;
}
/**
* This class defines a "batch" of events related to a specific iKey; it is used by the PostChannel and HttpManager
* to collect and transfer ownership of events without duplicating them in-memory. This reduces the previous
* array duplication and shared ownership issues that occurred due to race conditions caused by the async nature
* of sending requests.
*/
var EventBatch = /** @class */ (function () {
/**
* Private constructor so that caller is forced to use the static create method.
* @param iKey - The iKey to associate with the events (not validated)
* @param addEvents - The optional collection of events to assign to this batch - defaults to an empty array.
*/
function EventBatch(iKey, addEvents) {
var events = addEvents ? [].concat(addEvents) : [];
var _self = this;
var _msfpc = _getMsfpc(events);
_self.iKey = function () {
return iKey;
};
_self.Msfpc = function () {
// return the cached value unless it's undefined -- cached to avoid re-scanning the events (cpu)
return _msfpc || STR_EMPTY;
};
_self.count = function () {
return events.length;
};
_self.events = function () {
return events;
};
_self.addEvent = function (theEvent) {
if (theEvent) {
events.push(theEvent);
if (!_msfpc) {
// Not found so try and find one
_msfpc = _getEventMsfpc(theEvent);
}
return true;
}
return false;
};
_self.split = function (fromEvent, numEvents) {
// Create a new batch with the same iKey
var theEvents;
if (fromEvent < events.length) {
var cnt = events.length - fromEvent;
if (!isNullOrUndefined(numEvents)) {
cnt = numEvents < cnt ? numEvents : cnt;
}
theEvents = events.splice(fromEvent, cnt);
// reset the fetched msfpc value
_msfpc = _getMsfpc(events);
}
return new EventBatch(iKey, theEvents);
};
}
/**
* Creates a new Event Batch object
* @param iKey - The iKey associated with this batch of events
* @param theEvents - The optional collection of events to assign to the batch
*/
EventBatch.create = function (iKey, theEvents) {
return new EventBatch(iKey, theEvents);
};
return EventBatch;
}());
export { EventBatch };
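A small usage sketch of the batch API above (the event objects are simplified and only show the fields the batch itself reads):

import { EventBatch } from "./EventBatch";

var batch = EventBatch.create("example-tenant-key", [
    { name: "Page.View", iKey: "example-tenant-key" }
]);
batch.addEvent({ name: "Button.Click", iKey: "example-tenant-key" });

batch.iKey();   // "example-tenant-key"
batch.count();  // 2
batch.Msfpc();  // "" -- no event carried ext.intweb.msfpc

// Transfer ownership of everything from index 1 onward into a new batch
// (e.g. when a request can only take part of the batch).
var remainder = batch.split(1);
batch.count();      // 1
remainder.count();  // 1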

File diff suppressed because it is too large


@@ -0,0 +1,14 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
/**
* @name Index.ts
* @author Abhilash Panwar (abpanwar)
* @copyright Microsoft 2018
* File to export public classes.
*/
import PostChannel from "./PostChannel";
import { BE_PROFILE, NRT_PROFILE, RT_PROFILE, } from "./DataModels";
export { PostChannel, BE_PROFILE, NRT_PROFILE, RT_PROFILE, };


@@ -0,0 +1,40 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
// Licensed under the MIT License.
// !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
// Note: DON'T export these consts from the package; as we are still targeting ES3 this would export mutable variables that someone could change!!!
// !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
// Generally you should only put values that are used more than 2 times and then only if not already exposed as a constant (such as SdkCoreNames)
// as when using "short" named values from here they will be minified smaller than the SdkCoreNames[eSdkCoreNames.xxxx] value.
export var STR_EMPTY = "";
export var STR_POST_METHOD = "POST";
export var STR_DISABLED_PROPERTY_NAME = "Microsoft_ApplicationInsights_BypassAjaxInstrumentation";
export var STR_DROPPED = "drop";
export var STR_SENDING = "send";
export var STR_REQUEUE = "requeue";
export var STR_RESPONSE_FAIL = "rspFail";
export var STR_OTHER = "oth";
export var DEFAULT_CACHE_CONTROL = "no-cache, no-store";
export var DEFAULT_CONTENT_TYPE = "application/x-json-stream";
export var STR_CACHE_CONTROL = "cache-control";
export var STR_CONTENT_TYPE_HEADER = "content-type";
export var STR_KILL_TOKENS_HEADER = "kill-tokens";
export var STR_KILL_DURATION_HEADER = "kill-duration";
export var STR_KILL_DURATION_SECONDS_HEADER = "kill-duration-seconds";
export var STR_TIME_DELTA_HEADER = "time-delta-millis";
export var STR_CLIENT_VERSION = "client-version";
export var STR_CLIENT_ID = "client-id";
export var STR_TIME_DELTA_TO_APPLY = "time-delta-to-apply-millis";
export var STR_UPLOAD_TIME = "upload-time";
export var STR_API_KEY = "apikey";
export var STR_MSA_DEVICE_TICKET = "AuthMsaDeviceTicket";
export var STR_AUTH_XTOKEN = "AuthXToken";
export var STR_SDK_VERSION = "sdk-version";
export var STR_NO_RESPONSE_BODY = "NoResponseBody";
export var STR_MSFPC = "msfpc";
export var STR_TRACE = "trace";
export var STR_USER = "user";


@@ -0,0 +1,68 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
/**
* KillSwitch.ts
* @author Abhilash Panwar (abpanwar)
* @copyright Microsoft 2018
*/
import dynamicProto from "@microsoft/dynamicproto-js";
import { arrForEach, dateNow, strTrim } from "@microsoft/1ds-core-js";
var SecToMsMultiplier = 1000;
/**
* Class to stop certain tenants sending events.
*/
var KillSwitch = /** @class */ (function () {
function KillSwitch() {
var _killedTokenDictionary = {};
function _normalizeTenants(values) {
var result = [];
if (values) {
arrForEach(values, function (value) {
result.push(strTrim(value));
});
}
return result;
}
dynamicProto(KillSwitch, this, function (_self) {
_self.setKillSwitchTenants = function (killTokens, killDuration) {
if (killTokens && killDuration) {
try {
var killedTokens = _normalizeTenants(killTokens.split(","));
if (killDuration === "this-request-only") {
return killedTokens;
}
var durationMs = parseInt(killDuration, 10) * SecToMsMultiplier;
for (var i = 0; i < killedTokens.length; ++i) {
_killedTokenDictionary[killedTokens[i]] = dateNow() + durationMs;
}
}
catch (ex) {
return [];
}
}
return [];
};
_self.isTenantKilled = function (tenantToken) {
var killDictionary = _killedTokenDictionary;
var name = strTrim(tenantToken);
if (killDictionary[name] !== undefined && killDictionary[name] > dateNow()) {
return true;
}
delete killDictionary[name];
return false;
};
});
}
// Removed Stub for KillSwitch.prototype.setKillSwitchTenants.
// Removed Stub for KillSwitch.prototype.isTenantKilled.
// This is a workaround for an IE8 bug when using dynamicProto() with classes that don't have any
// non-dynamic functions or static properties/functions when using uglify-js to minify the resulting code.
// this will be removed when ES3 support is dropped.
KillSwitch.__ieDyn=1;
return KillSwitch;
}());
export default KillSwitch;
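A usage sketch of the class above. The token and duration values normally come from the collector's kill-tokens / kill-duration-seconds response headers listed in InternalConstants.js; the literals here are made up:

import KillSwitch from "./KillSwitch";

var killSwitch = new KillSwitch();

// Kill two tenants for 120 seconds; returns [] because the kill is time based.
killSwitch.setKillSwitchTenants("tenant-a, tenant-b", "120");

// "this-request-only" stores nothing; it just returns the tokens to drop for this request.
killSwitch.setKillSwitchTenants("tenant-c", "this-request-only");   // ["tenant-c"]

killSwitch.isTenantKilled("tenant-a");      // true (for the next 120 seconds)
killSwitch.isTenantKilled("  tenant-b  ");  // true -- tokens are trimmed before lookup
killSwitch.isTenantKilled("tenant-c");      // false -- it was only killed for that one request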


@@ -0,0 +1,912 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
import { __extendsFn as __extends } from "@microsoft/applicationinsights-shims";
/**
* PostManager.ts
* @author Abhilash Panwar (abpanwar); Hector Hernandez (hectorh); Nev Wylie (newylie)
* @copyright Microsoft 2018-2020
*/
import dynamicProto from "@microsoft/dynamicproto-js";
import { BaseTelemetryPlugin, EventsDiscardedReason, _throwInternal, addPageHideEventListener, addPageShowEventListener, addPageUnloadEventListener, arrForEach, createUniqueNamespace, doPerf, getWindow, isChromium, isNumber, isValueAssigned, mergeEvtNamespace, objDefineAccessors, objForEachKey, optimizeObject, removePageHideEventListener, removePageShowEventListener, removePageUnloadEventListener, setProcessTelemetryTimings } from "@microsoft/1ds-core-js";
import { BE_PROFILE, NRT_PROFILE, RT_PROFILE } from "./DataModels";
import { EventBatch } from "./EventBatch";
import { HttpManager } from "./HttpManager";
import { STR_MSA_DEVICE_TICKET, STR_TRACE, STR_USER } from "./InternalConstants";
import { retryPolicyGetMillisToBackoffForRetry } from "./RetryPolicy";
import { createTimeoutWrapper } from "./TimeoutOverrideWrapper";
var FlushCheckTimer = 0.250; // This needs to be in seconds, so this is 250ms
var MaxNumberEventPerBatch = 500;
var EventsDroppedAtOneTime = 20;
var MaxSendAttempts = 6;
var MaxSyncUnloadSendAttempts = 2; // Assuming 2 based on beforeunload and unload
var MaxBackoffCount = 4;
var MaxConnections = 2;
var MaxRequestRetriesBeforeBackoff = 1;
var strEventsDiscarded = "eventsDiscarded";
var strOverrideInstrumentationKey = "overrideInstrumentationKey";
var strMaxEventRetryAttempts = "maxEventRetryAttempts";
var strMaxUnloadEventRetryAttempts = "maxUnloadEventRetryAttempts";
var strAddUnloadCb = "addUnloadCb";
/**
* Class that manages adding events to inbound queues and batching of events
* into requests.
*/
var PostChannel = /** @class */ (function (_super) {
__extends(PostChannel, _super);
function PostChannel() {
var _this = _super.call(this) || this;
_this.identifier = "PostChannel";
_this.priority = 1011;
_this.version = '3.2.13';
var _config;
var _isTeardownCalled = false;
var _flushCallbackQueue = [];
var _flushCallbackTimerId = null;
var _paused = false;
var _immediateQueueSize = 0;
var _immediateQueueSizeLimit = 500;
var _queueSize = 0;
var _queueSizeLimit = 10000;
var _profiles = {};
var _currentProfile = RT_PROFILE;
var _scheduledTimerId = null;
var _immediateTimerId = null;
var _currentBackoffCount = 0;
var _timerCount = 0;
var _xhrOverride;
var _httpManager;
var _batchQueues = {};
var _autoFlushEventsLimit;
// either MaxBatchSize * (1+ Max Connections) or _queueLimit / 6 (where 3 latency Queues [normal, realtime, cost deferred] * 2 [allow half full -- allow for retry])
var _autoFlushBatchLimit;
var _delayedBatchSendLatency = -1;
var _delayedBatchReason;
var _optimizeObject = true;
var _isPageUnloadTriggered = false;
var _maxEventSendAttempts = MaxSendAttempts;
var _maxUnloadEventSendAttempts = MaxSyncUnloadSendAttempts;
var _evtNamespace;
var _timeoutWrapper;
dynamicProto(PostChannel, _this, function (_self, _base) {
_initDefaults();
// Special internal method to allow the DebugPlugin to hook embedded objects
_self["_getDbgPlgTargets"] = function () {
return [_httpManager];
};
_self.initialize = function (coreConfig, core, extensions) {
doPerf(core, function () { return "PostChannel:initialize"; }, function () {
var extendedCore = core;
_base.initialize(coreConfig, core, extensions);
try {
var hasAddUnloadCb = !!core[strAddUnloadCb];
_evtNamespace = mergeEvtNamespace(createUniqueNamespace(_self.identifier), core.evtNamespace && core.evtNamespace());
var ctx = _self._getTelCtx();
coreConfig.extensionConfig[_self.identifier] = coreConfig.extensionConfig[_self.identifier] || {};
_config = ctx.getExtCfg(_self.identifier);
_timeoutWrapper = createTimeoutWrapper(_config.setTimeoutOverride, _config.clearTimeoutOverride);
// Only try and use the optimizeObject() if this appears to be a chromium based browser and it has not been explicitly disabled
_optimizeObject = !_config.disableOptimizeObj && isChromium();
_hookWParam(extendedCore);
if (_config.eventsLimitInMem > 0) {
_queueSizeLimit = _config.eventsLimitInMem;
}
if (_config.immediateEventLimit > 0) {
_immediateQueueSizeLimit = _config.immediateEventLimit;
}
if (_config.autoFlushEventsLimit > 0) {
_autoFlushEventsLimit = _config.autoFlushEventsLimit;
}
if (isNumber(_config[strMaxEventRetryAttempts])) {
_maxEventSendAttempts = _config[strMaxEventRetryAttempts];
}
if (isNumber(_config[strMaxUnloadEventRetryAttempts])) {
_maxUnloadEventSendAttempts = _config[strMaxUnloadEventRetryAttempts];
}
_setAutoLimits();
if (_config.httpXHROverride && _config.httpXHROverride.sendPOST) {
_xhrOverride = _config.httpXHROverride;
}
if (isValueAssigned(coreConfig.anonCookieName)) {
_httpManager.addQueryStringParameter("anoncknm", coreConfig.anonCookieName);
}
_httpManager.sendHook = _config.payloadPreprocessor;
_httpManager.sendListener = _config.payloadListener;
// Override endpointUrl if provided in Post config
var endpointUrl = _config.overrideEndpointUrl ? _config.overrideEndpointUrl : coreConfig.endpointUrl;
_self._notificationManager = core.getNotifyMgr();
_httpManager.initialize(endpointUrl, _self.core, _self, _xhrOverride, _config);
var excludePageUnloadEvents = coreConfig.disablePageUnloadEvents || [];
// When running in Web browsers try to send all telemetry if page is unloaded
addPageUnloadEventListener(_handleUnloadEvents, excludePageUnloadEvents, _evtNamespace);
addPageHideEventListener(_handleUnloadEvents, excludePageUnloadEvents, _evtNamespace);
addPageShowEventListener(_handleShowEvents, coreConfig.disablePageShowEvents, _evtNamespace);
}
catch (e) {
// resetting the initialized state because of failure
_self.setInitialized(false);
throw e;
}
}, function () { return ({ coreConfig: coreConfig, core: core, extensions: extensions }); });
};
_self.processTelemetry = function (ev, itemCtx) {
setProcessTelemetryTimings(ev, _self.identifier);
itemCtx = _self._getTelCtx(itemCtx);
// Get the channel instance from the current request/instance
var channelConfig = itemCtx.getExtCfg(_self.identifier);
// DisableTelemetry was defined in the config provided during initialization
var disableTelemetry = !!_config.disableTelemetry;
if (channelConfig) {
// DisableTelemetry is defined in the config for this request/instance
disableTelemetry = disableTelemetry || !!channelConfig.disableTelemetry;
}
var event = ev;
if (!disableTelemetry && !_isTeardownCalled) {
// Override iKey if one was provided in the Post config during initialization
if (_config[strOverrideInstrumentationKey]) {
event.iKey = _config[strOverrideInstrumentationKey];
}
// Override iKey if one was provided in the channel config for this request/instance
if (channelConfig && channelConfig[strOverrideInstrumentationKey]) {
event.iKey = channelConfig[strOverrideInstrumentationKey];
}
_addEventToQueues(event, true);
if (_isPageUnloadTriggered) {
// Unload event has been received so we need to try and flush new events
_releaseAllQueues(2 /* EventSendType.SendBeacon */, 2 /* SendRequestReason.Unload */);
}
else {
_scheduleTimer();
}
}
_self.processNext(event, itemCtx);
};
_self._doTeardown = function (unloadCtx, unloadState) {
_releaseAllQueues(2 /* EventSendType.SendBeacon */, 2 /* SendRequestReason.Unload */);
_isTeardownCalled = true;
_httpManager.teardown();
removePageUnloadEventListener(null, _evtNamespace);
removePageHideEventListener(null, _evtNamespace);
removePageShowEventListener(null, _evtNamespace);
// Just register to remove all events associated with this namespace
_initDefaults();
};
function _hookWParam(extendedCore) {
var existingGetWParamMethod = extendedCore.getWParam;
extendedCore.getWParam = function () {
var wparam = 0;
if (_config.ignoreMc1Ms0CookieProcessing) {
wparam = wparam | 2;
}
return wparam | existingGetWParamMethod();
};
}
// Moving event handlers out from the initialize closure so that any local variables can be garbage collected
function _handleUnloadEvents(evt) {
var theEvt = evt || getWindow().event; // IE 8 does not pass the event
if (theEvt.type !== "beforeunload") {
// Only set the unload trigger if this is not a beforeunload event, as beforeunload can be cancelled while the other events can't
_isPageUnloadTriggered = true;
_httpManager.setUnloading(_isPageUnloadTriggered);
}
_releaseAllQueues(2 /* EventSendType.SendBeacon */, 2 /* SendRequestReason.Unload */);
}
function _handleShowEvents(evt) {
// Handle the page becoming visible again
_isPageUnloadTriggered = false;
_httpManager.setUnloading(_isPageUnloadTriggered);
}
function _addEventToQueues(event, append) {
// If send attempt field is undefined we should set it to 0.
if (!event.sendAttempt) {
event.sendAttempt = 0;
}
// Add default latency
if (!event.latency) {
event.latency = 1 /* EventLatencyValue.Normal */;
}
// Remove extra AI properties if present
if (event.ext && event.ext[STR_TRACE]) {
delete (event.ext[STR_TRACE]);
}
if (event.ext && event.ext[STR_USER] && event.ext[STR_USER]["id"]) {
delete (event.ext[STR_USER]["id"]);
}
// v8 performance optimization for iterating over the keys
if (_optimizeObject) {
event.ext = optimizeObject(event.ext);
if (event.baseData) {
event.baseData = optimizeObject(event.baseData);
}
if (event.data) {
event.data = optimizeObject(event.data);
}
}
if (event.sync) {
// If the transmission is backed off then do not send synchronous events.
// We will convert these events to Real time latency instead.
if (_currentBackoffCount || _paused) {
event.latency = 3 /* EventLatencyValue.RealTime */;
event.sync = false;
}
else {
// Log the event synchronously
if (_httpManager) {
// v8 performance optimization for iterating over the keys
if (_optimizeObject) {
event = optimizeObject(event);
}
_httpManager.sendSynchronousBatch(EventBatch.create(event.iKey, [event]), event.sync === true ? 1 /* EventSendType.Synchronous */ : event.sync, 3 /* SendRequestReason.SyncEvent */);
return;
}
}
}
var evtLatency = event.latency;
var queueSize = _queueSize;
var queueLimit = _queueSizeLimit;
if (evtLatency === 4 /* EventLatencyValue.Immediate */) {
queueSize = _immediateQueueSize;
queueLimit = _immediateQueueSizeLimit;
}
var eventDropped = false;
// Only add the event if the queue isn't full or it's a direct event (which don't add to the queue sizes)
if (queueSize < queueLimit) {
eventDropped = !_addEventToProperQueue(event, append);
}
else {
var dropLatency = 1 /* EventLatencyValue.Normal */;
var dropNumber = EventsDroppedAtOneTime;
if (evtLatency === 4 /* EventLatencyValue.Immediate */) {
// Only drop other immediate events as they are not technically sharing the general queue
dropLatency = 4 /* EventLatencyValue.Immediate */;
dropNumber = 1;
}
// Drop old event from lower or equal latency
eventDropped = true;
if (_dropEventWithLatencyOrLess(event.iKey, event.latency, dropLatency, dropNumber)) {
eventDropped = !_addEventToProperQueue(event, append);
}
}
if (eventDropped) {
// Can't drop events from the current queues because all the slots are taken by queues that are being flushed.
_notifyEvents(strEventsDiscarded, [event], EventsDiscardedReason.QueueFull);
}
}
_self.setEventQueueLimits = function (eventLimit, autoFlushLimit) {
_queueSizeLimit = eventLimit > 0 ? eventLimit : 10000;
_autoFlushEventsLimit = autoFlushLimit > 0 ? autoFlushLimit : 0;
_setAutoLimits();
// We only do this check here because during normal event addition, once the queue is over the limit events start getting dropped
var doFlush = _queueSize > eventLimit;
if (!doFlush && _autoFlushBatchLimit > 0) {
// Check the auto flush max batch size
for (var latency = 1 /* EventLatencyValue.Normal */; !doFlush && latency <= 3 /* EventLatencyValue.RealTime */; latency++) {
var batchQueue = _batchQueues[latency];
if (batchQueue && batchQueue.batches) {
arrForEach(batchQueue.batches, function (theBatch) {
if (theBatch && theBatch.count() >= _autoFlushBatchLimit) {
// If any 1 batch is > than the limit then trigger an auto flush
doFlush = true;
}
});
}
}
}
_performAutoFlush(true, doFlush);
};
_self.pause = function () {
_clearScheduledTimer();
_paused = true;
_httpManager.pause();
};
_self.resume = function () {
_paused = false;
_httpManager.resume();
_scheduleTimer();
};
_self.addResponseHandler = function (responseHandler) {
_httpManager._responseHandlers.push(responseHandler);
};
_self._loadTransmitProfiles = function (profiles) {
_resetTransmitProfiles();
objForEachKey(profiles, function (profileName, profileValue) {
var profLen = profileValue.length;
if (profLen >= 2) {
var directValue = (profLen > 2 ? profileValue[2] : 0);
profileValue.splice(0, profLen - 2);
// Make sure if a higher latency is set to not send then don't send lower latency
if (profileValue[1] < 0) {
profileValue[0] = -1;
}
// Make sure each latency is a multiple of the latency higher than it. If not a multiple
// we round up so that it becomes a multiple.
if (profileValue[1] > 0 && profileValue[0] > 0) {
var timerMultiplier = profileValue[0] / profileValue[1];
profileValue[0] = Math.ceil(timerMultiplier) * profileValue[1];
}
// Add back the direct profile timeout
if (directValue >= 0 && profileValue[1] >= 0 && directValue > profileValue[1]) {
// If it's not disabled (< 0) then make sure it's not larger than the RealTime value
directValue = profileValue[1];
}
profileValue.push(directValue);
_profiles[profileName] = profileValue;
}
});
};
_self.flush = function (async, callback, sendReason) {
if (async === void 0) { async = true; }
if (!_paused) {
sendReason = sendReason || 1 /* SendRequestReason.ManualFlush */;
if (async) {
if (_flushCallbackTimerId == null) {
// Clear the normal schedule timer as we are going to try and flush ASAP
_clearScheduledTimer();
// Move all queued events to the HttpManager so that we don't discard new events (Auto flush scenario)
_queueBatches(1 /* EventLatencyValue.Normal */, 0 /* EventSendType.Batched */, sendReason);
_flushCallbackTimerId = _createTimer(function () {
_flushCallbackTimerId = null;
_flushImpl(callback, sendReason);
}, 0);
}
else {
// Even if null (no callback) this will ensure after the flushImpl finishes waiting
// for a completely idle connection it will attempt to re-flush any queued events on the next cycle
_flushCallbackQueue.push(callback);
}
}
else {
// Clear the normal schedule timer as we are going to try and flush ASAP
var cleared = _clearScheduledTimer();
// Now cause all queued events to be sent synchronously
_sendEventsForLatencyAndAbove(1 /* EventLatencyValue.Normal */, 1 /* EventSendType.Synchronous */, sendReason);
if (callback !== null && callback !== undefined) {
callback();
}
if (cleared) {
// restart the normal event timer if it was cleared
_scheduleTimer();
}
}
}
};
_self.setMsaAuthTicket = function (ticket) {
_httpManager.addHeader(STR_MSA_DEVICE_TICKET, ticket);
};
_self.hasEvents = _hasEvents;
_self._setTransmitProfile = function (profileName) {
if (_currentProfile !== profileName && _profiles[profileName] !== undefined) {
_clearScheduledTimer();
_currentProfile = profileName;
_scheduleTimer();
}
};
/**
* Batch and send events currently in the queue for the given latency.
* @param latency - Latency for which to send events.
*/
function _sendEventsForLatencyAndAbove(latency, sendType, sendReason) {
var queued = _queueBatches(latency, sendType, sendReason);
// Always trigger the request as while the post channel may not have queued additional events, the httpManager may already have waiting events
_httpManager.sendQueuedRequests(sendType, sendReason);
return queued;
}
function _hasEvents() {
return _queueSize > 0;
}
/**
* Try to schedule the timer after which events will be sent. If there are
* no events to be sent, or there is already a timer scheduled, or the
* http manager doesn't have any idle connections this method is no-op.
*/
function _scheduleTimer() {
// If we had previously attempted to send requests, but the http manager didn't have any idle connections, then the requests were delayed
// so try and requeue them again now
if (_delayedBatchSendLatency >= 0 && _queueBatches(_delayedBatchSendLatency, 0 /* EventSendType.Batched */, _delayedBatchReason)) {
_httpManager.sendQueuedRequests(0 /* EventSendType.Batched */, _delayedBatchReason);
}
if (_immediateQueueSize > 0 && !_immediateTimerId && !_paused) {
// During initialization _profiles enforce that the direct [2] is less than real time [1] timer value
// If the immediateTimeout is disabled the immediate events will be sent with Real Time events
var immediateTimeOut = _profiles[_currentProfile][2];
if (immediateTimeOut >= 0) {
_immediateTimerId = _createTimer(function () {
_immediateTimerId = null;
// Only try to send direct events
_sendEventsForLatencyAndAbove(4 /* EventLatencyValue.Immediate */, 0 /* EventSendType.Batched */, 1 /* SendRequestReason.NormalSchedule */);
_scheduleTimer();
}, immediateTimeOut);
}
}
// During initialization the _profiles enforce that the normal [0] is a multiple of the real time [1] timer value
var timeOut = _profiles[_currentProfile][1];
if (!_scheduledTimerId && !_flushCallbackTimerId && timeOut >= 0 && !_paused) {
if (_hasEvents()) {
_scheduledTimerId = _createTimer(function () {
_scheduledTimerId = null;
_sendEventsForLatencyAndAbove(_timerCount === 0 ? 3 /* EventLatencyValue.RealTime */ : 1 /* EventLatencyValue.Normal */, 0 /* EventSendType.Batched */, 1 /* SendRequestReason.NormalSchedule */);
// Increment the count for next cycle
_timerCount++;
_timerCount %= 2;
_scheduleTimer();
}, timeOut);
}
else {
_timerCount = 0;
}
}
}
_self._backOffTransmission = function () {
if (_currentBackoffCount < MaxBackoffCount) {
_currentBackoffCount++;
_clearScheduledTimer();
_scheduleTimer();
}
};
_self._clearBackOff = function () {
if (_currentBackoffCount) {
_currentBackoffCount = 0;
_clearScheduledTimer();
_scheduleTimer();
}
};
function _initDefaults() {
_config = null;
_isTeardownCalled = false;
_flushCallbackQueue = [];
_flushCallbackTimerId = null;
_paused = false;
_immediateQueueSize = 0;
_immediateQueueSizeLimit = 500;
_queueSize = 0;
_queueSizeLimit = 10000;
_profiles = {};
_currentProfile = RT_PROFILE;
_scheduledTimerId = null;
_immediateTimerId = null;
_currentBackoffCount = 0;
_timerCount = 0;
_xhrOverride = null;
_batchQueues = {};
_autoFlushEventsLimit = undefined;
// either MaxBatchSize * (1+ Max Connections) or _queueLimit / 6 (where 3 latency Queues [normal, realtime, cost deferred] * 2 [allow half full -- allow for retry])
_autoFlushBatchLimit = 0;
_delayedBatchSendLatency = -1;
_delayedBatchReason = null;
_optimizeObject = true;
_isPageUnloadTriggered = false;
_maxEventSendAttempts = MaxSendAttempts;
_maxUnloadEventSendAttempts = MaxSyncUnloadSendAttempts;
_evtNamespace = null;
_timeoutWrapper = createTimeoutWrapper();
_httpManager = new HttpManager(MaxNumberEventPerBatch, MaxConnections, MaxRequestRetriesBeforeBackoff, {
requeue: _requeueEvents,
send: _sendingEvent,
sent: _eventsSentEvent,
drop: _eventsDropped,
rspFail: _eventsResponseFail,
oth: _otherEvent
}, _timeoutWrapper);
_initializeProfiles();
_clearQueues();
_setAutoLimits();
}
function _createTimer(theTimerFunc, timeOut) {
// If the transmission is backed off make the timer at least 1 sec to allow for back off.
if (timeOut === 0 && _currentBackoffCount) {
timeOut = 1;
}
var timerMultiplier = 1000;
if (_currentBackoffCount) {
timerMultiplier = retryPolicyGetMillisToBackoffForRetry(_currentBackoffCount - 1);
}
return _timeoutWrapper.set(theTimerFunc, timeOut * timerMultiplier);
}
function _clearScheduledTimer() {
if (_scheduledTimerId !== null) {
_timeoutWrapper.clear(_scheduledTimerId);
_scheduledTimerId = null;
_timerCount = 0;
return true;
}
return false;
}
// Try to send all queued events using beacons if available
function _releaseAllQueues(sendType, sendReason) {
_clearScheduledTimer();
// Cancel all flush callbacks
if (_flushCallbackTimerId) {
_timeoutWrapper.clear(_flushCallbackTimerId);
_flushCallbackTimerId = null;
}
if (!_paused) {
// Queue all the remaining requests to be sent. The requests will be sent using HTML5 Beacons if they are available.
_sendEventsForLatencyAndAbove(1 /* EventLatencyValue.Normal */, sendType, sendReason);
}
}
/**
* Add empty queues for all latencies in the inbound queues map. This is called
* when Transmission Manager is being flushed. This ensures that new events added
* after flush are stored separately till we flush the current events.
*/
function _clearQueues() {
_batchQueues[4 /* EventLatencyValue.Immediate */] = {
batches: [],
iKeyMap: {}
};
_batchQueues[3 /* EventLatencyValue.RealTime */] = {
batches: [],
iKeyMap: {}
};
_batchQueues[2 /* EventLatencyValue.CostDeferred */] = {
batches: [],
iKeyMap: {}
};
_batchQueues[1 /* EventLatencyValue.Normal */] = {
batches: [],
iKeyMap: {}
};
}
function _getEventBatch(iKey, latency, create) {
var batchQueue = _batchQueues[latency];
if (!batchQueue) {
latency = 1 /* EventLatencyValue.Normal */;
batchQueue = _batchQueues[latency];
}
var eventBatch = batchQueue.iKeyMap[iKey];
if (!eventBatch && create) {
eventBatch = EventBatch.create(iKey);
batchQueue.batches.push(eventBatch);
batchQueue.iKeyMap[iKey] = eventBatch;
}
return eventBatch;
}
function _performAutoFlush(isAsync, doFlush) {
// Only perform the auto flush check if the httpManager has an idle connection and we are not in a backoff situation
if (_httpManager.canSendRequest() && !_currentBackoffCount) {
if (_autoFlushEventsLimit > 0 && _queueSize > _autoFlushEventsLimit) {
// Force flushing
doFlush = true;
}
if (doFlush && _flushCallbackTimerId == null) {
// Auto flush the queue
_self.flush(isAsync, null, 20 /* SendRequestReason.MaxQueuedEvents */);
}
}
}
function _addEventToProperQueue(event, append) {
// v8 performance optimization for iterating over the keys
if (_optimizeObject) {
event = optimizeObject(event);
}
var latency = event.latency;
var eventBatch = _getEventBatch(event.iKey, latency, true);
if (eventBatch.addEvent(event)) {
if (latency !== 4 /* EventLatencyValue.Immediate */) {
_queueSize++;
// Check for auto flushing based on total events in the queue, but not for requeued or retry events
if (append && event.sendAttempt === 0) {
// Force the flushing of the batch if the batch (specific iKey / latency combination) reaches its auto flush limit
_performAutoFlush(!event.sync, _autoFlushBatchLimit > 0 && eventBatch.count() >= _autoFlushBatchLimit);
}
}
else {
// Direct events don't need auto flushing as they are scheduled (by default) for immediate delivery
_immediateQueueSize++;
}
return true;
}
return false;
}
function _dropEventWithLatencyOrLess(iKey, latency, currentLatency, dropNumber) {
while (currentLatency <= latency) {
var eventBatch = _getEventBatch(iKey, latency, true);
if (eventBatch && eventBatch.count() > 0) {
// Dropped oldest events from lowest possible latency
var droppedEvents = eventBatch.split(0, dropNumber);
var droppedCount = droppedEvents.count();
if (droppedCount > 0) {
if (currentLatency === 4 /* EventLatencyValue.Immediate */) {
_immediateQueueSize -= droppedCount;
}
else {
_queueSize -= droppedCount;
}
_notifyBatchEvents(strEventsDiscarded, [droppedEvents], EventsDiscardedReason.QueueFull);
return true;
}
}
currentLatency++;
}
// Unable to drop any events -- let's just make sure the queue counts are correct to avoid exhaustion
_resetQueueCounts();
return false;
}
/**
* Internal helper to reset the queue counts, used as a backstop to avoid future queue exhaustion errors
* that might occur because of counting issues.
*/
function _resetQueueCounts() {
var immediateQueue = 0;
var normalQueue = 0;
var _loop_1 = function (latency) {
var batchQueue = _batchQueues[latency];
if (batchQueue && batchQueue.batches) {
arrForEach(batchQueue.batches, function (theBatch) {
if (latency === 4 /* EventLatencyValue.Immediate */) {
immediateQueue += theBatch.count();
}
else {
normalQueue += theBatch.count();
}
});
}
};
for (var latency = 1 /* EventLatencyValue.Normal */; latency <= 4 /* EventLatencyValue.Immediate */; latency++) {
_loop_1(latency);
}
_queueSize = normalQueue;
_immediateQueueSize = immediateQueue;
}
function _queueBatches(latency, sendType, sendReason) {
var eventsQueued = false;
var isAsync = sendType === 0 /* EventSendType.Batched */;
// Only queue batches (to the HttpManager) if this is a sync request or the httpManager has an idle connection
// Thus keeping the events within the PostChannel until the HttpManager has a connection available
// This is so we can drop "old" events if the queue is getting full because we can't successfully send events
if (!isAsync || _httpManager.canSendRequest()) {
doPerf(_self.core, function () { return "PostChannel._queueBatches"; }, function () {
var droppedEvents = [];
var latencyToProcess = 4 /* EventLatencyValue.Immediate */;
while (latencyToProcess >= latency) {
var batchQueue = _batchQueues[latencyToProcess];
if (batchQueue && batchQueue.batches && batchQueue.batches.length > 0) {
arrForEach(batchQueue.batches, function (theBatch) {
// Add the batch to the http manager to send the requests
if (!_httpManager.addBatch(theBatch)) {
// The events from this iKey are being dropped (killed)
droppedEvents = droppedEvents.concat(theBatch.events());
}
else {
eventsQueued = eventsQueued || (theBatch && theBatch.count() > 0);
}
if (latencyToProcess === 4 /* EventLatencyValue.Immediate */) {
_immediateQueueSize -= theBatch.count();
}
else {
_queueSize -= theBatch.count();
}
});
// Remove all batches from this Queue
batchQueue.batches = [];
batchQueue.iKeyMap = {};
}
latencyToProcess--;
}
if (droppedEvents.length > 0) {
_notifyEvents(strEventsDiscarded, droppedEvents, EventsDiscardedReason.KillSwitch);
}
if (eventsQueued && _delayedBatchSendLatency >= latency) {
// We have queued events at the same level as the delayed values so clear the setting
_delayedBatchSendLatency = -1;
_delayedBatchReason = 0 /* SendRequestReason.Undefined */;
}
}, function () { return ({ latency: latency, sendType: sendType, sendReason: sendReason }); }, !isAsync);
}
else {
// remember the min latency so that we can re-trigger later
_delayedBatchSendLatency = _delayedBatchSendLatency >= 0 ? Math.min(_delayedBatchSendLatency, latency) : latency;
_delayedBatchReason = Math.max(_delayedBatchReason, sendReason);
}
return eventsQueued;
}
/**
* This is the callback method that is called as part of the manual flushing process.
* @param callback
* @param sendReason
*/
function _flushImpl(callback, sendReason) {
// Add any additional queued events and cause all queued events to be sent asynchronously
_sendEventsForLatencyAndAbove(1 /* EventLatencyValue.Normal */, 0 /* EventSendType.Batched */, sendReason);
// All events (should) have been queued -- let's just make sure the queue counts are correct to avoid queue exhaustion (previous bug #9685112)
_resetQueueCounts();
_waitForIdleManager(function () {
// Only called AFTER the httpManager does not have any outstanding requests
if (callback) {
callback();
}
if (_flushCallbackQueue.length > 0) {
_flushCallbackTimerId = _createTimer(function () {
_flushCallbackTimerId = null;
_flushImpl(_flushCallbackQueue.shift(), sendReason);
}, 0);
}
else {
// No more flush requests
_flushCallbackTimerId = null;
// Restart the normal timer schedule
_scheduleTimer();
}
});
}
function _waitForIdleManager(callback) {
if (_httpManager.isCompletelyIdle()) {
callback();
}
else {
_flushCallbackTimerId = _createTimer(function () {
_flushCallbackTimerId = null;
_waitForIdleManager(callback);
}, FlushCheckTimer);
}
}
/**
* Resets the transmit profiles to the default profiles of Real Time, Near Real Time
* and Best Effort. This removes all the custom profiles that were loaded.
*/
function _resetTransmitProfiles() {
_clearScheduledTimer();
_initializeProfiles();
_currentProfile = RT_PROFILE;
_scheduleTimer();
}
function _initializeProfiles() {
_profiles = {};
_profiles[RT_PROFILE] = [2, 1, 0];
_profiles[NRT_PROFILE] = [6, 3, 0];
_profiles[BE_PROFILE] = [18, 9, 0];
}
/**
* The notification handler for requeue events
* @ignore
*/
function _requeueEvents(batches, reason) {
var droppedEvents = [];
var maxSendAttempts = _maxEventSendAttempts;
if (_isPageUnloadTriggered) {
// If a page unload has been triggered, reduce the number of times we try to "retry"
maxSendAttempts = _maxUnloadEventSendAttempts;
}
arrForEach(batches, function (theBatch) {
if (theBatch && theBatch.count() > 0) {
arrForEach(theBatch.events(), function (theEvent) {
if (theEvent) {
// Check if the request being added back is for a sync event in which case mark it no longer a sync event
if (theEvent.sync) {
theEvent.latency = 4 /* EventLatencyValue.Immediate */;
theEvent.sync = false;
}
if (theEvent.sendAttempt < maxSendAttempts) {
// Reset the event timings
setProcessTelemetryTimings(theEvent, _self.identifier);
_addEventToQueues(theEvent, false);
}
else {
droppedEvents.push(theEvent);
}
}
});
}
});
if (droppedEvents.length > 0) {
_notifyEvents(strEventsDiscarded, droppedEvents, EventsDiscardedReason.NonRetryableStatus);
}
if (_isPageUnloadTriggered) {
// Unload event has been received so we need to try and flush new events
_releaseAllQueues(2 /* EventSendType.SendBeacon */, 2 /* SendRequestReason.Unload */);
}
}
function _callNotification(evtName, theArgs) {
var manager = (_self._notificationManager || {});
var notifyFunc = manager[evtName];
if (notifyFunc) {
try {
notifyFunc.apply(manager, theArgs);
}
catch (e) {
_throwInternal(_self.diagLog(), 1 /* eLoggingSeverity.CRITICAL */, 74 /* _eInternalMessageId.NotificationException */, evtName + " notification failed: " + e);
}
}
}
function _notifyEvents(evtName, theEvents) {
var extraArgs = [];
for (var _i = 2; _i < arguments.length; _i++) {
extraArgs[_i - 2] = arguments[_i];
}
if (theEvents && theEvents.length > 0) {
_callNotification(evtName, [theEvents].concat(extraArgs));
}
}
function _notifyBatchEvents(evtName, batches) {
var extraArgs = [];
for (var _i = 2; _i < arguments.length; _i++) {
extraArgs[_i - 2] = arguments[_i];
}
if (batches && batches.length > 0) {
arrForEach(batches, function (theBatch) {
if (theBatch && theBatch.count() > 0) {
_callNotification(evtName, [theBatch.events()].concat(extraArgs));
}
});
}
}
/**
* The notification handler for when batches are about to be sent
* @ignore
*/
function _sendingEvent(batches, reason, isSyncRequest) {
if (batches && batches.length > 0) {
_callNotification("eventsSendRequest", [(reason >= 1000 /* EventBatchNotificationReason.SendingUndefined */ && reason <= 1999 /* EventBatchNotificationReason.SendingEventMax */ ?
reason - 1000 /* EventBatchNotificationReason.SendingUndefined */ :
0 /* SendRequestReason.Undefined */), isSyncRequest !== true]);
}
}
/**
* This event represents that a batch of events has been successfully sent and a response received
* @param batches - The batches that have been successfully sent
* @param reason - For this event the reason will always be EventBatchNotificationReason.Complete
*/
function _eventsSentEvent(batches, reason) {
_notifyBatchEvents("eventsSent", batches, reason);
// Try and schedule the processing timer if we have events
_scheduleTimer();
}
function _eventsDropped(batches, reason) {
_notifyBatchEvents(strEventsDiscarded, batches, (reason >= 8000 /* EventBatchNotificationReason.EventsDropped */ && reason <= 8999 /* EventBatchNotificationReason.EventsDroppedMax */ ?
reason - 8000 /* EventBatchNotificationReason.EventsDropped */ :
EventsDiscardedReason.Unknown));
}
function _eventsResponseFail(batches) {
_notifyBatchEvents(strEventsDiscarded, batches, EventsDiscardedReason.NonRetryableStatus);
// Try and schedule the processing timer if we have events
_scheduleTimer();
}
function _otherEvent(batches, reason) {
_notifyBatchEvents(strEventsDiscarded, batches, EventsDiscardedReason.Unknown);
// Try and schedule the processing timer if we have events
_scheduleTimer();
}
function _setAutoLimits() {
if (!_config || !_config.disableAutoBatchFlushLimit) {
_autoFlushBatchLimit = Math.max(MaxNumberEventPerBatch * (MaxConnections + 1), _queueSizeLimit / 6);
}
else {
_autoFlushBatchLimit = 0;
}
}
// Provided for backward compatibility; they are not "expected" to be in current use but they are public
objDefineAccessors(_self, "_setTimeoutOverride", function () { return _timeoutWrapper.set; }, function (value) {
// Recreate the timeout wrapper
_timeoutWrapper = createTimeoutWrapper(value, _timeoutWrapper.clear);
});
objDefineAccessors(_self, "_clearTimeoutOverride", function () { return _timeoutWrapper.clear; }, function (value) {
// Recreate the timeout wrapper
_timeoutWrapper = createTimeoutWrapper(_timeoutWrapper.set, value);
});
});
return _this;
}
// Removed Stub for PostChannel.prototype.initialize.
// Removed Stub for PostChannel.prototype.processTelemetry.
// Removed Stub for PostChannel.prototype.setEventQueueLimits.
// Removed Stub for PostChannel.prototype.pause.
// Removed Stub for PostChannel.prototype.resume.
// Removed Stub for PostChannel.prototype.addResponseHandler.
// Removed Stub for PostChannel.prototype.flush.
// Removed Stub for PostChannel.prototype.setMsaAuthTicket.
// Removed Stub for PostChannel.prototype.hasEvents.
// Removed Stub for PostChannel.prototype._loadTransmitProfiles.
// Removed Stub for PostChannel.prototype._setTransmitProfile.
// Removed Stub for PostChannel.prototype._backOffTransmission.
// Removed Stub for PostChannel.prototype._clearBackOff.
// This is a workaround for an IE8 bug when using dynamicProto() with classes that don't have any
// non-dynamic functions or static properties/functions when using uglify-js to minify the resulting code.
// this will be removed when ES3 support is dropped.
PostChannel.__ieDyn=1;
return PostChannel;
}(BaseTelemetryPlugin));
export default PostChannel;
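A hedged end-to-end sketch of wiring this channel up. The extension-config keys used here (eventsLimitInMem, immediateEventLimit, maxEventRetryAttempts) are read by the initialize() code above, but the AppInsightsCore/channels initialization shape is assumed from the usual 1DS core pattern and is not part of this diff, so treat it as illustrative rather than authoritative:

import { AppInsightsCore } from "@microsoft/1ds-core-js";
import PostChannel from "./PostChannel";

var postChannel = new PostChannel();
var core = new AppInsightsCore();

var coreConfig = {
    instrumentationKey: "YOUR-TENANT-TOKEN",                            // placeholder
    endpointUrl: "https://collector.example.invalid/OneCollector/1.0/", // hypothetical collector URL
    channels: [[postChannel]],                                          // assumed core config shape
    extensionConfig: {}
};
coreConfig.extensionConfig[postChannel.identifier] = {  // identifier === "PostChannel"
    eventsLimitInMem: 5000,     // cap the in-memory queue (default 10000)
    immediateEventLimit: 200,   // cap the Immediate-latency queue (default 500)
    maxEventRetryAttempts: 4    // lower the default of 6 send attempts
};

core.initialize(coreConfig, []);
core.track({ name: "App.Started", iKey: "YOUR-TENANT-TOKEN" });

// Force everything queued to be handed to the HttpManager asynchronously, e.g. before navigation:
postChannel.flush(true, function () { /* all queued events have been sent or handed off */ });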


@@ -0,0 +1,49 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
/**
* RetryPolicy.ts
* @author Abhilash Panwar (abpanwar)
* @copyright Microsoft 2018
*/
var RandomizationLowerThreshold = 0.8;
var RandomizationUpperThreshold = 1.2;
var BaseBackoff = 3000;
var MaxBackoff = 600000;
/**
* Determine if the request should be retried for the given status code.
* The below expression reads that we should only retry for:
* - HttpStatusCodes that are smaller than 300.
* - HttpStatusCodes greater than or equal to 500 (except for 501-NotImplemented
* and 505-HttpVersionNotSupported).
* - HttpStatusCode 408-RequestTimeout.
* - HttpStatusCode 429.
* This is based on the Microsoft.WindowsAzure.Storage.RetryPolicies.ExponentialRetry class.
* @param httpStatusCode - The status code returned for the request.
* @returns True if request should be retried, false otherwise.
*/
export function retryPolicyShouldRetryForStatus(httpStatusCode) {
/* tslint:disable:triple-equals */
// Disabling the triple-equals rule to avoid httpOverrides failing when they return a string value
return !((httpStatusCode >= 300 && httpStatusCode < 500 && httpStatusCode != 408 && httpStatusCode != 429)
|| (httpStatusCode == 501)
|| (httpStatusCode == 505));
/* tslint:enable:triple-equals */
}
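// Illustrative only - a hypothetical lookup (not part of the published API) showing what the
// policy above returns for a few representative status codes.
var _exampleRetryDecisions = {
    "408": retryPolicyShouldRetryForStatus(408), // true  - RequestTimeout
    "429": retryPolicyShouldRetryForStatus(429), // true  - throttled (TooManyRequests)
    "400": retryPolicyShouldRetryForStatus(400), // false - non-retryable client error
    "501": retryPolicyShouldRetryForStatus(501), // false - NotImplemented
    "503": retryPolicyShouldRetryForStatus(503)  // true  - transient server error
};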
/**
* Gets the number of milliseconds to back off before retrying the request. The
* back off duration is exponentially scaled based on the number of retries already
* done for the request.
* @param retriesSoFar - The number of times the request has already been retried.
* @returns The back off duration for the request before it can be retried.
*/
export function retryPolicyGetMillisToBackoffForRetry(retriesSoFar) {
var waitDuration = 0;
var minBackoff = BaseBackoff * RandomizationLowerThreshold;
var maxBackoff = BaseBackoff * RandomizationUpperThreshold;
var randomBackoff = Math.floor(Math.random() * (maxBackoff - minBackoff)) + minBackoff;
waitDuration = Math.pow(2, retriesSoFar) * randomBackoff;
return Math.min(waitDuration, MaxBackoff);
}
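// Usage sketch (illustrative only; this helper is hypothetical and not part of this module):
// how a transport loop might combine the two policy functions above. With BaseBackoff = 3000 and
// +/-20% randomization, retry 0 waits ~2.4-3.6s, retry 1 ~4.8-7.2s, retry 2 ~9.6-14.4s, capped at 10 minutes.
function _exampleNextRetryDelay(httpStatusCode, retriesSoFar) {
    if (!retryPolicyShouldRetryForStatus(httpStatusCode)) {
        return -1; // non-retryable status - caller should drop or dead-letter the batch
    }
    return retryPolicyGetMillisToBackoffForRetry(retriesSoFar);
}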

View File

@ -0,0 +1,318 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
/**
* Serializer.ts
* @author Abhilash Panwar (abpanwar); Hector Hernandez (hectorh); Nev Wylie (newylie)
* @copyright Microsoft 2018-2020
*/
// @skip-file-minify
import dynamicProto from "@microsoft/dynamicproto-js";
import { arrIndexOf, doPerf, getCommonSchemaMetaData, getTenantId, isArray, isValueAssigned, objForEachKey, sanitizeProperty, strStartsWith } from "@microsoft/1ds-core-js";
import { EventBatch } from "./EventBatch";
import { STR_EMPTY } from "./InternalConstants";
/**
* Note: This is an optimization for V8-based browsers. When V8 concatenates a string,
* the strings are only joined logically using a "cons string" or "constructed/concatenated
* string". These containers keep references to one another and can result in very large
* memory usage. For example, if a 2MB string is constructed by concatenating 4 bytes
* together at a time, the memory usage will be ~44MB; so ~22x increase. The strings are
* only joined together when an operation requiring their joining takes place, such as
* substr(). This function is called when adding data to this buffer to ensure these
* types of strings are periodically joined to reduce the memory footprint.
 * This is set to every 20 events because JSON.stringify() may already have joined many strings,
 * and calling this too often causes a minor delay while processing.
*/
var _MAX_STRING_JOINS = 20;
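// Illustrative sketch only (not part of the SDK): the periodic-join pattern described above,
// applied to a hypothetical array of string chunks. The throw-away substr() call forces V8 to
// flatten the accumulated "cons string" into a contiguous buffer every _MAX_STRING_JOINS appends.
function _exampleJoinLargeString(chunks) {
    var blob = "";
    var joinCount = 0;
    for (var lp = 0; lp < chunks.length; lp++) {
        blob += chunks[lp];
        if (++joinCount > _MAX_STRING_JOINS) {
            blob.substr(0, 1);
            joinCount = 0;
        }
    }
    return blob;
}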
var RequestSizeLimitBytes = 3984588; // approx 3.8 MB
var BeaconRequestSizeLimitBytes = 65000; // approx 64 KB (the current Edge, Firefox and Chrome max limit)
var MaxRecordSize = 2000000; // approx 2 MB
var MaxBeaconRecordSize = Math.min(MaxRecordSize, BeaconRequestSizeLimitBytes);
var metadata = "metadata";
var f = "f";
var rCheckDot = /\./;
/**
* Class to handle serialization of event and request.
 * Events are serialized to JSON (see getEventBlob). Please note that this may be subject to change.
*/
var Serializer = /** @class */ (function () {
function Serializer(perfManager, valueSanitizer, stringifyObjects, enableCompoundKey) {
var strData = "data";
var strBaseData = "baseData";
var strExt = "ext";
var _checkForCompoundkey = !!enableCompoundKey;
var _processSubMetaData = true;
var _theSanitizer = valueSanitizer;
var _isReservedCache = {};
dynamicProto(Serializer, this, function (_self) {
_self.createPayload = function (retryCnt, isTeardown, isSync, isReducedPayload, sendReason, sendType) {
return {
apiKeys: [],
payloadBlob: STR_EMPTY,
overflow: null,
sizeExceed: [],
failedEvts: [],
batches: [],
numEvents: 0,
retryCnt: retryCnt,
isTeardown: isTeardown,
isSync: isSync,
isBeacon: isReducedPayload,
sendType: sendType,
sendReason: sendReason
};
};
_self.appendPayload = function (payload, theBatch, maxEventsPerBatch) {
var canAddEvents = payload && theBatch && !payload.overflow;
if (canAddEvents) {
doPerf(perfManager, function () { return "Serializer:appendPayload"; }, function () {
var theEvents = theBatch.events();
var payloadBlob = payload.payloadBlob;
var payloadEvents = payload.numEvents;
var eventsAdded = false;
var sizeExceeded = [];
var failedEvts = [];
var isBeaconPayload = payload.isBeacon;
var requestMaxSize = isBeaconPayload ? BeaconRequestSizeLimitBytes : RequestSizeLimitBytes;
var recordMaxSize = isBeaconPayload ? MaxBeaconRecordSize : MaxRecordSize;
var lp = 0;
var joinCount = 0;
while (lp < theEvents.length) {
var theEvent = theEvents[lp];
if (theEvent) {
if (payloadEvents >= maxEventsPerBatch) {
// Maximum events per payload reached, so don't add any more
payload.overflow = theBatch.split(lp);
break;
}
var eventBlob = _self.getEventBlob(theEvent);
if (eventBlob && eventBlob.length <= recordMaxSize) {
// This event will fit into the payload
var blobLength = eventBlob.length;
var currentSize = payloadBlob.length;
if (currentSize + blobLength > requestMaxSize) {
// Request or batch size exceeded, so don't add any more to the payload
payload.overflow = theBatch.split(lp);
break;
}
if (payloadBlob) {
payloadBlob += "\n";
}
payloadBlob += eventBlob;
joinCount++;
// v8 memory optimization only
if (joinCount > _MAX_STRING_JOINS) {
// this substr() should cause the constructed string to join
payloadBlob.substr(0, 1);
joinCount = 0;
}
eventsAdded = true;
payloadEvents++;
}
else {
if (eventBlob) {
// Single event size exceeded so remove from the batch
sizeExceeded.push(theEvent);
}
else {
failedEvts.push(theEvent);
}
// We also need to remove this event from the existing array, otherwise a notification will be sent
// indicating that it was successfully sent
theEvents.splice(lp, 1);
lp--;
}
}
lp++;
}
if (sizeExceeded && sizeExceeded.length > 0) {
payload.sizeExceed.push(EventBatch.create(theBatch.iKey(), sizeExceeded));
// Remove the exceeded events from the batch
}
if (failedEvts && failedEvts.length > 0) {
payload.failedEvts.push(EventBatch.create(theBatch.iKey(), failedEvts));
// Remove the failed events from the batch
}
if (eventsAdded) {
payload.batches.push(theBatch);
payload.payloadBlob = payloadBlob;
payload.numEvents = payloadEvents;
var apiKey = theBatch.iKey();
if (arrIndexOf(payload.apiKeys, apiKey) === -1) {
payload.apiKeys.push(apiKey);
}
}
}, function () { return ({ payload: payload, theBatch: { iKey: theBatch.iKey(), evts: theBatch.events() }, max: maxEventsPerBatch }); });
}
return canAddEvents;
};
_self.getEventBlob = function (eventData) {
try {
return doPerf(perfManager, function () { return "Serializer.getEventBlob"; }, function () {
var serializedEvent = {};
// Adding as dynamic keys for v8 performance
serializedEvent.name = eventData.name;
serializedEvent.time = eventData.time;
serializedEvent.ver = eventData.ver;
serializedEvent.iKey = "o:" + getTenantId(eventData.iKey);
// Assigning a local var so that the Part B / Part C usage below doesn't throw if there is no ext
var serializedExt = {};
// Part A
var eventExt = eventData[strExt];
if (eventExt) {
// Only assign ext if the event had one (There are tests covering this use case)
serializedEvent[strExt] = serializedExt;
objForEachKey(eventExt, function (key, value) {
var data = serializedExt[key] = {};
// Don't include a metadata callback as we don't currently set metadata Part A fields
_processPathKeys(value, data, "ext." + key, true, null, null, true);
});
}
var serializedData = serializedEvent[strData] = {};
serializedData.baseType = eventData.baseType;
var serializedBaseData = serializedData[strBaseData] = {};
// Part B
_processPathKeys(eventData.baseData, serializedBaseData, strBaseData, false, [strBaseData], function (pathKeys, name, value) {
_addJSONPropertyMetaData(serializedExt, pathKeys, name, value);
}, _processSubMetaData);
// Part C
_processPathKeys(eventData.data, serializedData, strData, false, [], function (pathKeys, name, value) {
_addJSONPropertyMetaData(serializedExt, pathKeys, name, value);
}, _processSubMetaData);
return JSON.stringify(serializedEvent);
}, function () { return ({ item: eventData }); });
}
catch (e) {
return null;
}
};
function _isReservedField(path, name) {
var result = _isReservedCache[path];
if (result === undefined) {
if (path.length >= 7) {
// Do not allow the changing of fields located in the ext.metadata or ext.web extension
result = strStartsWith(path, "ext.metadata") || strStartsWith(path, "ext.web");
}
_isReservedCache[path] = result;
}
return result;
}
function _processPathKeys(srcObj, target, thePath, checkReserved, metadataPathKeys, metadataCallback, processSubKeys) {
objForEachKey(srcObj, function (key, srcValue) {
var prop = null;
if (srcValue || isValueAssigned(srcValue)) {
var path = thePath;
var name_1 = key;
var theMetaPathKeys = metadataPathKeys;
var destObj = target;
// Handle keys with embedded '.', like "TestObject.testProperty"
if (_checkForCompoundkey && !checkReserved && rCheckDot.test(key)) {
var subKeys = key.split(".");
var keyLen = subKeys.length;
if (keyLen > 1) {
if (theMetaPathKeys) {
// Create a copy of the meta path keys so we can add the extra ones
theMetaPathKeys = theMetaPathKeys.slice();
}
for (var lp = 0; lp < keyLen - 1; lp++) {
var subKey = subKeys[lp];
// Add/reuse the sub key object
destObj = destObj[subKey] = destObj[subKey] || {};
path += "." + subKey;
if (theMetaPathKeys) {
theMetaPathKeys.push(subKey);
}
}
name_1 = subKeys[keyLen - 1];
}
}
var isReserved = checkReserved && _isReservedField(path, name_1);
if (!isReserved && _theSanitizer && _theSanitizer.handleField(path, name_1)) {
prop = _theSanitizer.value(path, name_1, srcValue, stringifyObjects);
}
else {
prop = sanitizeProperty(name_1, srcValue, stringifyObjects);
}
if (prop) {
// Set the value
var newValue = prop.value;
destObj[name_1] = newValue;
if (metadataCallback) {
metadataCallback(theMetaPathKeys, name_1, prop);
}
if (processSubKeys && typeof newValue === "object" && !isArray(newValue)) {
var newPath = theMetaPathKeys;
if (newPath) {
newPath = newPath.slice();
newPath.push(name_1);
}
// Make sure we process sub objects as well (for value sanitization and metadata)
_processPathKeys(srcValue, newValue, path + "." + name_1, checkReserved, newPath, metadataCallback, processSubKeys);
}
}
}
});
}
});
}
// Removed Stub for Serializer.prototype.createPayload.
// Removed Stub for Serializer.prototype.appendPayload.
// Removed Stub for Serializer.prototype.getEventBlob.
// Removed Stub for Serializer.prototype.handleField.
// Removed Stub for Serializer.prototype.getSanitizer.
// This is a workaround for an IE8 bug when using dynamicProto() with classes that don't have any
// non-dynamic functions or static properties/functions when using uglify-js to minify the resulting code.
// this will be removed when ES3 support is dropped.
Serializer.__ieDyn=1;
return Serializer;
}());
export { Serializer };
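// Illustrative only: the approximate JSON shape produced by Serializer.getEventBlob() for a
// simple event. All field values (including the tenant token "TENANT-123") are made-up examples.
var _exampleSerializedShape = {
    name: "My.Event",
    time: "2023-01-01T00:00:00.000Z",
    ver: "4.0",
    iKey: "o:TENANT-123",          // "o:" + the tenant id extracted from the event's full iKey
    ext: {                         // Part A - only present if the source event had an ext block
        user: { localId: "u1" }
        // ext.metadata (value kind/type info) is added here by _addJSONPropertyMetaData below
    },
    data: {
        baseType: "MyBaseType",
        baseData: { id: 1 },       // Part B
        customProp: "value"        // Part C
    }
};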
/**
* @ignore
*/
function _addJSONPropertyMetaData(json, propKeys, name, propertyValue) {
if (propertyValue && json) {
var encodedTypeValue = getCommonSchemaMetaData(propertyValue.value, propertyValue.kind, propertyValue.propertyType);
if (encodedTypeValue > -1) {
// Add the root metadata
var metaData = json[metadata];
if (!metaData) {
// Sets the root 'f'
metaData = json[metadata] = { f: {} };
}
var metaTarget = metaData[f];
if (!metaTarget) {
// This can occur if someone has manually added an ext.metadata object
// Such as ext.metadata.privLevel and ext.metadata.privTags
metaTarget = metaData[f] = {};
}
// Traverse the metadata path and build each object (contains an 'f' key) -- if required
if (propKeys) {
for (var lp = 0; lp < propKeys.length; lp++) {
var key = propKeys[lp];
if (!metaTarget[key]) {
metaTarget[key] = { f: {} };
}
var newTarget = metaTarget[key][f];
if (!newTarget) {
// Not expected, but can occur if the metadata context was pre-created as part of the event
newTarget = metaTarget[key][f] = {};
}
metaTarget = newTarget;
}
}
metaTarget = metaTarget[name] = {};
if (isArray(propertyValue.value)) {
metaTarget["a"] = {
t: encodedTypeValue
};
}
else {
metaTarget["t"] = encodedTypeValue;
}
}
}
}
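// Illustrative only: the ext.metadata structure built by _addJSONPropertyMetaData above for a
// hypothetical array property "baseData.items" (the json argument is the event's serialized ext object).
// The numeric type code comes from getCommonSchemaMetaData(); 0 is just a placeholder here.
var _exampleMetadataShape = {
    metadata: {
        f: {
            baseData: {
                f: {
                    items: { a: { t: 0 } }
                }
            }
        }
    }
};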

View File

@ -0,0 +1,27 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
/**
* TimeoutOverrideWrapper.ts
* @author Nev Wylie (newylie)
* @copyright Microsoft 2022
* Simple internal timeout wrapper
*/
// Note: any additional arguments are forwarded to the callback as a single array (passed as the
// third parameter to the native setTimeout), rather than being spread as individual arguments.
export function defaultSetTimeout(callback, ms) {
var args = [];
for (var _i = 2; _i < arguments.length; _i++) {
args[_i - 2] = arguments[_i];
}
return setTimeout(callback, ms, args);
}
export function defaultClearTimeout(timeoutId) {
clearTimeout(timeoutId);
}
export function createTimeoutWrapper(argSetTimeout, argClearTimeout) {
return {
set: argSetTimeout || defaultSetTimeout,
clear: argClearTimeout || defaultClearTimeout
};
}
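// Usage sketch (illustrative only; the override below is hypothetical): callers can supply their
// own scheduling functions, and any argument left null/undefined falls back to the defaults above.
var _exampleWrapper = createTimeoutWrapper(function (callback, ms) {
    // e.g. route scheduling through a test clock or instrumented timer instead of the global setTimeout
    return setTimeout(callback, ms);
}, null); // clear falls back to defaultClearTimeout
// _exampleWrapper.set(fn, 100) schedules the callback; _exampleWrapper.clear(id) cancels it.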

View File

@ -0,0 +1,10 @@
/*
* 1DS JS SDK POST plugin, 3.2.13
* Copyright (c) Microsoft and contributors. All rights reserved.
* (Microsoft Internal Only)
*/
export {};
// export declare var XDomainRequest: {
// prototype: IXDomainRequest;
// new (): IXDomainRequest;
// };

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,45 @@
{
"name": "@microsoft/1ds-post-js",
"version": "3.2.13",
"description": "Microsoft Application Insights JavaScript SDK - 1ds-post-js extensions",
"author": "Microsoft Application Insights Team",
"homepage": "https://github.com/microsoft/ApplicationInsights-JS#readme",
"license": "MIT",
"sideEffects": false,
"scripts": {
"ai-min": "grunt post-min",
"ai-restore": "grunt post-restore",
"publishPackage": "npm publish",
"sri": "node ../../tools/subResourceIntegrity/generateIntegrityFile.js",
"npm-pack": "npm pack"
},
"publishConfig": {
"registry": "https://registry.npmjs.org"
},
"dependencies": {
"@microsoft/applicationinsights-shims": "^2.0.2",
"@microsoft/dynamicproto-js": "^1.1.7",
"@microsoft/1ds-core-js": "3.2.13"
},
"devDependencies": {
"grunt": "^1.4.1",
"typescript": "^4.3.5"
},
"repository": {
"type": "git",
"url": "https://github.com/microsoft/ApplicationInsights-JS"
},
"main": "dist/ms.post.js",
"module": "dist-esm/src/Index.js",
"keywords": [
"1ds",
"azure",
"cloud",
"script errors",
"microsoft",
"application insights",
"Js",
"SDK"
],
"types": "dist-esm/src/Index.d.ts"
}

View File

@ -0,0 +1,24 @@
{
"compilerOptions": {
"sourceMap": true,
"inlineSources": true,
"module": "es6",
"moduleResolution": "Node",
"target": "es3",
"alwaysStrict": true,
"strictNullChecks": false,
"suppressImplicitAnyIndexErrors": true,
"allowSyntheticDefaultImports": true,
"importHelpers": true,
"noEmitHelpers": true,
"forceConsistentCasingInFileNames": true,
"declaration": true,
"outDir": "dist-esm/src/"
},
"include": [
"./src/**/*.ts"
],
"exclude": [
"node_modules/"
]
}