* Increase organization max seat size from 30k to 2b (#1274)
* Increase organization max seat size from 30k to 2b
* PR review: do not modify unless the state matches the expected state
* Organization sync simultaneous event reporting (#1275)
* Split up azure messages according to max size
* Allow simultaneous login of organization user events
* Early resolve small event lists
* Clarify logic
Co-authored-by: Chad Scharf <3904944+cscharf@users.noreply.github.com>
* Improve readability
This comes at the cost of multiple serializations, but the
improvement in wire time should more than make up for it on
messages where serialization time matters (sketched below)
Co-authored-by: Chad Scharf <3904944+cscharf@users.noreply.github.com>
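A minimal sketch of the size-capped splitting described above, assuming a serializable event type and Azure's roughly 64 KB queue message limit; the class, constant, and method names are illustrative, not the actual implementation.

```csharp
using System.Collections.Generic;
using System.Text.Json;

public static class EventMessageSplitter
{
    // Azure Storage queue messages max out around 64 KB, so leave some headroom.
    private const int MaxMessageBytes = 60 * 1024;

    // Greedily packs events into batches whose serialized form stays under the cap.
    // Each candidate batch is re-serialized, trading extra CPU for fewer, valid messages.
    public static IEnumerable<string> Split<TEvent>(IEnumerable<TEvent> events)
    {
        var batch = new List<TEvent>();
        foreach (var e in events)
        {
            batch.Add(e);
            if (batch.Count > 1 && JsonSerializer.Serialize(batch).Length > MaxMessageBytes)
            {
                // The last event pushed the batch over the limit; emit the previous batch.
                batch.RemoveAt(batch.Count - 1);
                yield return JsonSerializer.Serialize(batch);
                batch = new List<TEvent> { e };
            }
        }
        if (batch.Count > 0)
        {
            yield return JsonSerializer.Serialize(batch);
        }
    }
}
```

Each batch that overflows is serialized more than once, which is the cost the note above accepts in exchange for staying under the wire limit.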
* Queue emails (#1286)
* Extract common Azure queue methods
* Do not use the internal Entity Framework namespace
* Prefer IEnumerable to IList unless needed
All of these implementations were just using `Count == 1`,
which is easily replicated. This will be used when abstracting Azure queues
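A tiny illustration of how the `Count == 1` check can be replicated against an `IEnumerable` without materializing a list (the extension name is hypothetical):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class EnumerableExtensions
{
    // True when the sequence holds exactly one element; Take(2) avoids
    // enumerating more than two items and works for any IEnumerable.
    public static bool IsSingle<T>(this IEnumerable<T> source) =>
        source.Take(2).Count() == 1;
}
```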
* Add model for azure queue message
* Abstract Azure queue for reuse
* Create a service to enqueue mail messages for later processing
Azure queue mail service uses Azure queues.
The blocking implementation simply blocks until all the work is done;
this is how emailing works today
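A rough sketch of what the queueing side might look like, assuming the Azure.Storage.Queues client and hypothetical service and model names:

```csharp
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Storage.Queues;

// Minimal shape of a queued mail message; the real model likely carries more fields.
public class MailQueueMessage
{
    public string ToEmail { get; set; }
    public string Subject { get; set; }
    public string Category { get; set; }
    public object Model { get; set; }
}

public interface IMailEnqueuingService
{
    Task EnqueueAsync(MailQueueMessage message);
}

// Serializes the message onto an Azure queue so a background worker can send it later.
public class AzureQueueMailService : IMailEnqueuingService
{
    private readonly QueueClient _queueClient;

    public AzureQueueMailService(string connectionString, string queueName = "mail")
    {
        _queueClient = new QueueClient(connectionString, queueName);
    }

    public Task EnqueueAsync(MailQueueMessage message) =>
        _queueClient.SendMessageAsync(JsonSerializer.Serialize(message));
}
```

The blocking variant would implement the same interface but build and send the email inline, matching today's behavior.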
* Provide mail queue service to DI
* Queue organization invite emails for later processing
All emails can later be added to this queue
* Create Admin hosted service to process enqueued mail messages
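A hedged sketch of such a hosted service, registered with `services.AddHostedService<...>()` in the Admin project; everything except the ASP.NET Core and Azure SDK types is an assumption.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Microsoft.Extensions.Hosting;

// Polls the mail queue and hands each message to the real mail sender.
public class MailProcessingHostedService : BackgroundService
{
    private readonly QueueClient _queueClient;

    public MailProcessingHostedService(QueueClient queueClient)
    {
        _queueClient = queueClient;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var messages = await _queueClient.ReceiveMessagesAsync(
                maxMessages: 32, cancellationToken: stoppingToken);

            foreach (var message in messages.Value)
            {
                // Deserialize and send the email here (omitted), then delete the
                // message so it is not delivered again.
                await _queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
            }

            if (messages.Value.Length == 0)
            {
                await Task.Delay(TimeSpan.FromSeconds(15), stoppingToken);
            }
        }
    }
}
```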
* Prefer constructors to static generators
* Mass delete organization users (#1287)
* Add delete many to Organization Users
* Correct formatting
* Remove erroneous migration
* Clarify parameter name
* Formatting fixes
* Simplify bump account revision sproc
* Formatting fixes
* Match file names to objects
* Indicate if large import is expected
* Early pull all existing users we were planning on inviting (#1290)
* Early pull all existing users we were planning on inviting
* Improve sproc name
* Batch upsert org users (#1289)
* Add UpsertMany sprocs to OrganizationUser
* Add method to create TVPs from any object.
Uses the DbOrder attribute to generate the column order.
The sproc will fail unless the TVP column order matches that of the db type
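A sketch of the reflection-based TVP builder, with a hypothetical `DbOrder` attribute; the key point is that SQL Server matches TVP columns by position, so the generated column order must mirror the database table type exactly.

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;

// Hypothetical attribute marking a property's column position in the SQL table type.
[AttributeUsage(AttributeTargets.Property)]
public class DbOrderAttribute : Attribute
{
    public int Order { get; }
    public DbOrderAttribute(int order) => Order = order;
}

public static class TvpExtensions
{
    // Builds a DataTable whose columns follow the [DbOrder] positions. A mismatch
    // with the database type's column order fails when the sproc executes.
    public static DataTable ToTvp<T>(this IEnumerable<T> items)
    {
        var orderedProps = typeof(T).GetProperties()
            .Select(p => (Property: p,
                Order: ((DbOrderAttribute)Attribute.GetCustomAttribute(
                    p, typeof(DbOrderAttribute)))?.Order))
            .Where(p => p.Order.HasValue)
            .OrderBy(p => p.Order.Value)
            .ToList();

        var table = new DataTable();
        foreach (var (property, _) in orderedProps)
        {
            var columnType = Nullable.GetUnderlyingType(property.PropertyType)
                ?? property.PropertyType;
            table.Columns.Add(property.Name, columnType);
        }

        foreach (var item in items)
        {
            table.Rows.Add(orderedProps
                .Select(p => p.Property.GetValue(item) ?? DBNull.Value)
                .ToArray());
        }
        return table;
    }
}
```

The table would then be passed as a structured parameter, e.g. `new SqlParameter("@Users", SqlDbType.Structured) { Value = users.ToTvp(), TypeName = "[dbo].[OrganizationUserType]" }` (parameter and type names hypothetical); the later "front load all reflection" commit suggests the property lookup is cached per type rather than repeated per call.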
* Combine migrations
* Correct formatting
* Include SQL objects in the SQL project
* Keep consistent parameter names
* Batch deletes for performance
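The batching can live in the sproc (see the `@BatchSize` note below) or in application code; purely as an illustration, an application-side version might chunk the ids before calling the delete sproc:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class BatchedDelete
{
    // Deletes ids in fixed-size chunks so each round trip and transaction stays small.
    public static async Task DeleteManyAsync(
        IEnumerable<Guid> organizationUserIds,
        Func<IReadOnlyList<Guid>, Task> deleteBatchAsync, // e.g. a call to the delete sproc
        int batchSize = 100)
    {
        var batch = new List<Guid>(batchSize);
        foreach (var id in organizationUserIds)
        {
            batch.Add(id);
            if (batch.Count == batchSize)
            {
                await deleteBatchAsync(batch);
                batch.Clear();
            }
        }
        if (batch.Count > 0)
        {
            await deleteBatchAsync(batch);
        }
    }
}
```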
* Correct formatting
* Consolidate migrations
* Use batch methods in OrganizationImport
* Declare @BatchSize
* Transaction names are limited to 32 characters
Drop the sproc before creating it if it already exists
* Update import tests
* Allow for more users in org upgrades
* Fix formatting
* Improve class hierarchy structure
* Use named tuple types
* Fix formatting
* Front load all reflection
* Format constructor
* Simplify ToTvp as class-specific extension
Co-authored-by: Chad Scharf <3904944+cscharf@users.noreply.github.com>
* Add Cipher attachment upload endpoints
* Add validation bool to attachment storage data
This bool is used to determine whether or not to renew upload links
* Add model to request a new attachment to be made for later upload
* Add model to respond with created attachment.
The two cipher properties represent the two different
cipher model types that can be returned: the full cipher response for
personal items and the mini response for organization items
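A hedged sketch of the request/response pair; property and class names are assumptions, and the two cipher classes stand in for the existing response models.

```csharp
// Stand-ins for the existing cipher response models.
public class CipherResponseModel { }
public class CipherMiniResponseModel { }

// Client registers a new attachment before uploading the file itself.
public class AttachmentRequestModel
{
    public string Key { get; set; }       // encrypted attachment key
    public string FileName { get; set; }  // encrypted file name
    public long FileSize { get; set; }
}

// Server responds with the new attachment id, an upload URL, and the updated cipher.
// Only one of the two cipher properties is populated: the full response for personal
// items, the mini response for organization items.
public class AttachmentUploadDataResponseModel
{
    public string AttachmentId { get; set; }
    public string Url { get; set; }
    public CipherResponseModel CipherResponse { get; set; }
    public CipherMiniResponseModel CipherMiniResponse { get; set; }
}
```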
* Create Azure SAS-authorized upload links for both one-shot and block uploads
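A sketch of how such a link could be minted with Azure.Storage.Blobs, assuming the blob client is constructed with a shared key credential; the lifetime, permissions, and names are illustrative.

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

public class AttachmentUploadUrlBuilder
{
    private readonly BlobContainerClient _container;

    public AttachmentUploadUrlBuilder(BlobContainerClient container)
    {
        _container = container;
    }

    // Returns a short-lived, write-only SAS URL for the attachment blob. The same URL
    // works for a one-shot Put Blob or for Put Block / Put Block List uploads of
    // larger files, since both only need Create/Write permissions.
    public Uri GetUploadUrl(string blobName, TimeSpan lifetime)
    {
        var blob = _container.GetBlobClient(blobName);
        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = _container.Name,
            BlobName = blobName,
            Resource = "b",
            ExpiresOn = DateTimeOffset.UtcNow.Add(lifetime),
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Create | BlobSasPermissions.Write);
        // Requires a client that can sign SAS tokens (shared key credential).
        return blob.GenerateSasUri(sasBuilder);
    }
}
```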
* Add service methods to handle delayed upload and file size validation
* Add emergency access method for downloading attachments direct from Azure
* Add new attachment storage methods to other services
* Update service interfaces
* Log Event Grid exceptions
* Limit Send and Attachment Size to 500MB
* Capitalize Key property
* Add key validation to Azure Event Grid endpoint
* Delete blob for unexpected blob creation events
* Set Event Grid key at API startup
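A rough sketch of the Event Grid endpoint: it rejects requests that don't carry the configured key, answers the subscription validation handshake, and treats unexpected `Microsoft.Storage.BlobCreated` events as candidates for deletion. The route, names, and key transport are assumptions.

```csharp
using System.Text.Json;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("azure-event-grid")] // route is an assumption
public class AzureEventGridController : ControllerBase
{
    // Set once at API startup from configuration; requests must echo it back.
    public static string Key { get; set; }

    [HttpPost]
    public IActionResult Post([FromQuery] string key, [FromBody] JsonElement events)
    {
        if (string.IsNullOrEmpty(Key) || key != Key)
        {
            return Unauthorized();
        }

        foreach (var e in events.EnumerateArray())
        {
            var eventType = e.GetProperty("eventType").GetString();

            // Event Grid subscription handshake: echo the validation code back.
            if (eventType == "Microsoft.EventGrid.SubscriptionValidationEvent")
            {
                var code = e.GetProperty("data").GetProperty("validationCode").GetString();
                return Ok(new { validationResponse = code });
            }

            if (eventType == "Microsoft.Storage.BlobCreated")
            {
                // Look up the blob from the event subject/URL; if no matching pending
                // attachment or Send exists, delete the unexpected blob (omitted here).
            }
        }
        return Ok();
    }
}
```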
* Change renew attachment upload url request path to match Send
* Shore up attachment cleanup method.
As long as we have the required information, we should always delete
attachments from the repository, the cipher in memory, and the
file storage service to ensure they all stay in sync.
* Get limited-lifetime attachment download URL
This change limits the download URL to a one-minute lifetime.
It requires moving to a new container to allow for non-public blob
access.
Clients will have to call the GetAttachmentData API function to receive the download
URL. For backwards compatibility, attachment URLs are still present, but will not
work for attachments stored in non-public access blobs.
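The download side would then look something like this sketch: a read-only SAS with an expiry roughly one minute out, returned from the GetAttachmentData call (names again assumed).

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

public static class AttachmentDownloadUrl
{
    // Because the container is no longer publicly readable, clients receive a
    // read-only SAS URL that expires about one minute after it is issued.
    public static Uri GetDownloadUrl(BlobContainerClient container, string blobName)
    {
        var blob = container.GetBlobClient(blobName);
        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = container.Name,
            BlobName = blobName,
            Resource = "b",
            ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(1),
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read);
        return blob.GenerateSasUri(sasBuilder);
    }
}
```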
* Make GlobalSettings interface for testing
* Test LocalAttachmentStorageService equivalence
* Remove comment
* Add missing globalSettings using
* Simplify default attachment container
* Default to the attachments container for existing methods
A new upload method will be made for uploading to attachments-v2.
For compatibility with clients that don't use these new methods, we need
to keep using the old container. The new container will be used only for
new uploads
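A tiny sketch of the resulting container split, with the legacy container name assumed:

```csharp
public static class AttachmentContainers
{
    public const string Legacy = "attachments";     // blob-level public access, old direct URLs still work
    public const string Delayed = "attachments-v2"; // private access, reachable only via SAS URLs

    // Existing code paths default to the legacy container; only the new
    // delayed-upload methods write to attachments-v2.
    public static string Resolve(bool useDelayedUpload) =>
        useDelayedUpload ? Delayed : Legacy;
}
```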
* Remove Default MetaData fixture.
* Keep attachments container blob-level security for all instances
* Close unclosed FileStream
* Favor default value for noop services