* Increase organization max seat size from 30k to 2b (#1274)
* Increase organization max seat size from 30k to 2b
* PR review. Do not modify unless state matches expected
* Organization sync simultaneous event reporting (#1275)
* Split up Azure messages according to max size
* Allow simultaneous login of organization user events
* Early resolve small event lists
* Clarify logic
Co-authored-by: Chad Scharf <3904944+cscharf@users.noreply.github.com>
* Improve readability
This comes at the cost of multiple serializations, but the
improvement in wire time should more than make up for it
on messages where serialization time matters (see the sketch below)
Co-authored-by: Chad Scharf <3904944+cscharf@users.noreply.github.com>
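A minimal sketch of the size-based splitting described above, assuming a hypothetical `SerializeInChunks` helper and the roughly 64 KB Azure Storage queue message cap; small lists resolve early with a single serialization, larger ones are split in half and re-serialized:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

public static class EventChunker
{
    // Azure Storage queue messages cap out around 64 KB; the real
    // implementation would leave headroom for encoding overhead.
    private const int _maxMessageSize = 64 * 1024;

    public static IEnumerable<string> SerializeInChunks<T>(IList<T> events)
    {
        var serialized = JsonSerializer.Serialize(events);
        // Early resolve: small lists fit in a single message.
        if (serialized.Length <= _maxMessageSize || events.Count == 1)
        {
            yield return serialized;
            yield break;
        }
        // Too large: split in half and re-serialize each side. This is the
        // "cost of multiple serializations" traded for smaller wire time.
        var half = events.Count / 2;
        foreach (var chunk in SerializeInChunks(events.Take(half).ToList()))
        {
            yield return chunk;
        }
        foreach (var chunk in SerializeInChunks(events.Skip(half).ToList()))
        {
            yield return chunk;
        }
    }
}
```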
* Queue emails (#1286)
* Extract common Azure queue methods
* Do not use internal entity framework namespace
* Prefer IEnumerable to IList unless needed
All of these implementations were just using `Count == 1`,
which is easily replicated. This will be used when abstracting Azure queues
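The `Count == 1` check is trivial to replicate against `IEnumerable` without materializing a list; a sketch with a hypothetical extension method:

```csharp
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // True when the sequence has exactly one element, without needing Count.
    public static bool IsSingle<T>(this IEnumerable<T> source)
    {
        using var enumerator = source.GetEnumerator();
        return enumerator.MoveNext() && !enumerator.MoveNext();
    }
}
```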
* Add model for azure queue message
* Abstract Azure queue for reuse
* Create service to enqueue mail messages for later processing
The Azure queue mail service uses Azure queues. The blocking
implementation just blocks until all the work is done -- this is
how emailing works today (see the sketch below)
* Provide mail queue service to DI
* Queue organization invite emails for later processing
All emails can later be added to this queue
* Create Admin hosted service to process enqueued mail messages
* Prefer constructors to static generators
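A rough sketch of the enqueue/dequeue split, using the current Azure.Storage.Queues SDK; the names `AzureQueueMailService` and `MailQueueProcessor` follow the commits above, but the exact shapes are assumptions:

```csharp
using System;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Microsoft.Extensions.Hosting;

public class AzureQueueMailService
{
    private readonly QueueClient _queue;

    public AzureQueueMailService(string connectionString, string queueName)
    {
        _queue = new QueueClient(connectionString, queueName);
    }

    // Callers enqueue and return immediately; sending happens later.
    public async Task EnqueueAsync(object mailQueueMessage)
    {
        await _queue.SendMessageAsync(JsonSerializer.Serialize(mailQueueMessage));
    }
}

// Hosted in the Admin project: drains the queue and actually sends mail.
public class MailQueueProcessor : BackgroundService
{
    private readonly QueueClient _queue;

    public MailQueueProcessor(QueueClient queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var messages = await _queue.ReceiveMessagesAsync(maxMessages: 32);
            foreach (var message in messages.Value)
            {
                // ... deserialize and send via the real mail service ...
                await _queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
            }
            await Task.Delay(TimeSpan.FromSeconds(15), stoppingToken);
        }
    }
}
```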
* Mass delete organization users (#1287)
* Add delete many to Organization Users
* Correct formatting
* Remove erroneous migration
* Clarify parameter name
* Formatting fixes
* Simplify bump account revision sproc
* Formatting fixes
* Match file names to objects
* Indicate if large import is expected
* Early pull all existing users we were planning on inviting (#1290)
* Early pull all existing users we were planning on inviting
* Improve sproc name
* Batch upsert org users (#1289)
* Add UpsertMany sprocs to OrganizationUser
* Add method to create TVPs from any object.
Uses a DbOrder attribute to generate the column order.
The sproc will fail unless the TVP column order matches that of the db type
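A sketch of the idea, assuming a hypothetical `DbOrder` attribute; reflection is resolved once up front, and columns are added in attribute order so the TVP lines up with the SQL table type:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class DbOrderAttribute : Attribute
{
    public int Order { get; }
    public DbOrderAttribute(int order) => Order = order;
}

public static class TvpExtensions
{
    public static DataTable ToTvp<T>(this IEnumerable<T> items)
    {
        // Front-load all reflection: order properties by DbOrder once.
        var props = typeof(T).GetProperties()
            .Select(p => (Property: p, Order: p.GetCustomAttribute<DbOrderAttribute>()?.Order))
            .Where(x => x.Order.HasValue)
            .OrderBy(x => x.Order.Value)
            .Select(x => x.Property)
            .ToArray();

        var table = new DataTable();
        foreach (var prop in props)
        {
            // Unwrap Nullable<T> so DataTable accepts the column type.
            table.Columns.Add(prop.Name,
                Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType);
        }
        foreach (var item in items)
        {
            table.Rows.Add(props.Select(p => p.GetValue(item) ?? DBNull.Value).ToArray());
        }
        return table;
    }
}
```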
* Combine migrations
* Correct formatting
* Include sql objects in sql project
* Keep consistent parameter names
* Batch deletes for performance
* Correct formatting
* consolidate migrations
* Use batch methods in OrganizationImport
* Declare @BatchSize
* Transaction names limited to 32 chars
Drop sproc before creating it if it exists
* Update import tests
* Allow for more users in org upgrades
* Fix formatting
* Improve class hierarchy structure
* Use named tuple types
* Fix formatting
* Front load all reflection
* Format constructor
* Simplify ToTvp as class-specific extension
Co-authored-by: Chad Scharf <3904944+cscharf@users.noreply.github.com>
* Add Cipher attachment upload endpoints
* Add validation bool to attachment storage data
This bool is used to determine whether or not to renew upload links
* Add model to request a new attachment to be made for later upload
* Add model to respond with created attachment.
The two cipher properties represent the two different
cipher model types that can be returned: a cipher response for
personal items and a mini response for organizations
* Create Azure SAS-authorized upload links for both one-shot and block uploads
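On the client side of such a link, a one-shot upload is a single `UploadAsync`, while a block upload stages chunks and commits the block list; a sketch using Azure.Storage.Blobs (the block size and helper name are assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs.Specialized;

public static class SasUpload
{
    public static async Task UploadInBlocksAsync(Uri blobSasUri, Stream content)
    {
        var blob = new BlockBlobClient(blobSasUri);
        const int blockSize = 4 * 1024 * 1024; // hypothetical 4 MB blocks
        var buffer = new byte[blockSize];
        var blockIds = new List<string>();
        int read, index = 0;
        while ((read = await content.ReadAsync(buffer, 0, blockSize)) > 0)
        {
            // Block ids must be equal-length base64 strings.
            var blockId = Convert.ToBase64String(BitConverter.GetBytes(index++));
            using var block = new MemoryStream(buffer, 0, read);
            await blob.StageBlockAsync(blockId, block);
            blockIds.Add(blockId);
        }
        await blob.CommitBlockListAsync(blockIds);
    }
}
```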
* Add service methods to handle delayed upload and file size validation
* Add emergency access method for downloading attachments direct from Azure
* Add new attachment storage methods to other services
* Update service interfaces
* Log event grid exceptions
* Limit Send and Attachment Size to 500MB
* capitalize Key property
* Add key validation to Azure Event Grid endpoint
* Delete blob for unexpected blob creation events
* Set Event Grid key at API startup
* Change renew attachment upload url request path to match Send
* Shore up attachment cleanup method.
As long as we have the required information, we should always delete
attachments from each of the repository, the cipher in memory, and the
file storage service to ensure they all stay in sync.
* Direct upload to Azure
To validate file sizes in the event of a rogue client, Azure event webhooks
will be hooked up to AzureValidateFile.
Sends outside of a grace size will be deleted as non-compliant.
TODO: LocalSendFileStorageService direct upload method/endpoint.
* Quick respond to no-body event calls
These shouldn't happen, but might if some errant GET requests occur (see the sketch below)
* Event Grid only POSTS to webhook
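A sketch of the webhook shape using Azure.Messaging.EventGrid; the key comparison, route, and field names are assumptions, but the handshake and the quick return on empty bodies follow the commits above:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.EventGrid;
using Azure.Messaging.EventGrid.SystemEvents;
using Microsoft.AspNetCore.Mvc;

public class AzureEventGridController : Controller
{
    private readonly string _expectedKey; // set from config at API startup

    public AzureEventGridController(string expectedKey) => _expectedKey = expectedKey;

    [HttpPost("~/azure-event-grid")]
    public async Task<IActionResult> Post([FromQuery] string key)
    {
        // Reject callers that do not present the shared key.
        if (key != _expectedKey)
        {
            return Unauthorized();
        }
        var body = await BinaryData.FromStreamAsync(Request.Body);
        // Quick-respond to no-body calls.
        if (body.ToMemory().IsEmpty)
        {
            return Ok();
        }
        foreach (var gridEvent in EventGridEvent.ParseMany(body))
        {
            if (gridEvent.TryGetSystemEventData(out var systemEvent)
                && systemEvent is SubscriptionValidationEventData validation)
            {
                // Subscription handshake: echo the validation code back.
                return Ok(new { validationResponse = validation.ValidationCode });
            }
            // ... otherwise validate blob size / delete unexpected blobs ...
        }
        return Ok();
    }
}
```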
* Enable local storage direct file upload
* Increase file size difference leeway
* Upload through service
* Fix LocalFileSendStorage
It turns out that multipart HTTP streams do not have a length
until read. This causes all large files to be "invalid". We need to
write the entire stream, then validate its length, just like Azure.
The difference is, we can return an exception from local storage
admonishing the client for lying
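A sketch of the write-then-validate approach for local storage; the method name, leeway parameter, and exception type are assumptions:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public static class LocalFileValidation
{
    public static async Task SaveAndValidateAsync(
        Stream multipartStream, string path, long expectedSize, long leeway)
    {
        // Multipart HTTP streams have no Length until read, so write the
        // whole stream first and check the file size afterwards.
        using (var fileStream = File.Create(path))
        {
            await multipartStream.CopyToAsync(fileStream);
        }
        var actualSize = new FileInfo(path).Length;
        if (Math.Abs(actualSize - expectedSize) > leeway)
        {
            File.Delete(path); // non-compliant size: remove the file
            throw new InvalidOperationException("Reported file size does not match.");
        }
    }
}
```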
* Update src/Api/Utilities/ApiHelpers.cs
Co-authored-by: Chad Scharf <3904944+cscharf@users.noreply.github.com>
* Do not delete directory if it has files
* Allow large uploads for self hosted instances
* Fix formatting
* Re-verify access and increment access count on download of Send File
* Update src/Core/Services/Implementations/SendService.cs
Co-authored-by: Chad Scharf <3904944+cscharf@users.noreply.github.com>
* Add back in original Send upload
* Update size and mark as validated upon Send file validation
* Log Azure file validation errors
* Lint fix
Co-authored-by: Chad Scharf <3904944+cscharf@users.noreply.github.com>
* Add sendId to path
Event Grid returns the blob path, which will be used to grab a Send and verify file size
* Re-validate access upon file download
Increment access count only when file is downloaded. File
name and size are leaked, but this is a good first step toward
solving the access-download race
* Remove Url from SendFileModel
The URL is now generated on the fly with a limited lifetime.
A new model houses the generated download URL
* Create API endpoint for getting Send file download url
* Generate limited-life Azure download urls
* Lint fix
* Get limited life attachment download URL
This change limits the download URL to a 1-minute lifetime.
This requires moving to a new container to allow for non-public blob
access.
Clients will have to call the GetAttachmentData API function to receive the
download URL. For backwards compatibility, attachment URLs are still present,
but will not work for attachments stored in non-public access blobs.
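Generating the limited-life link amounts to a read-only SAS with a short expiry; a sketch with Azure.Storage.Blobs (the one-minute window matches the note above, the rest is assumed):

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

public static class AttachmentDownload
{
    // blobClient must be constructed with shared key credentials so it can sign.
    public static Uri GetLimitedLifeDownloadUrl(BlobClient blobClient)
    {
        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = blobClient.BlobContainerName,
            BlobName = blobClient.Name,
            Resource = "b", // the SAS targets a single blob
            ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(1),
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read);
        return blobClient.GenerateSasUri(sasBuilder);
    }
}
```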
* Make GlobalSettings interface for testing
* Test LocalAttachmentStorageService equivalence
* Remove comment
* Add missing globalSettings using
* Simplify default attachment container
* Default to attachments container for existing methods
A new upload method will be made for uploading to attachments-v2.
For compatibility with clients which don't use these new methods, we need
to still use the old container. The new container will be used only for
new uploads
* Remove Default MetaData fixture.
* Keep attachments container blob-level security for all instances
* Close unclosed FileStream
* Favor default value for noop services
* added OnlyOrg to PolicyType enum
* blocked accepting new org invitations if OnlyOrg is relevant to the userOrg
* blocked creating new orgs if already in an org with OnlyOrg enabled
* created email alert for OnlyOrg policy
* removed users & sent alerts when appropriate for the OnlyOrg policy
* added method to noop mail service
* cleanup for OnlyOrg policy server logic
* blocked confirming new org users if they have violated the OnlyOrg policy since accepting
* added localization strings needed for the OnlyOrg policy
* allowed OnlyOrg policy configuration from the portal
* used correct localization key for onlyorg
* formatting and messaging changes for OnlyOrg
* formatting
* messaging change
* code review changes for onlyorg
* slimmed down a conditional
* optimized getting many orgUser records from many userIds
* removed a test file
* sql formatting
* weirdness
* trying to resolve git diff formatting issues
* Add email notification on Two Factor recovery use
* A user who has lost their 2fa device can clear out the
2fa settings using a recovery code. When this happens
it gets logged, but no notification is sent to the user.
* Add a notification to be sent when the 2fa recovery code is
used
* Add email message templates