Onboarding users in ASP.NET Core using Azure AD Temporary Access Pass and Microsoft Graph

This article looks at onboarding different Azure AD users with a temporary access pass (TAP) and some form of passwordless authentication. An ASP.NET Core application is used to create the Azure AD member users, who can then use a TAP to set up their accounts. This is a great way to onboard users in your tenant.

Code: https://github.com/damienbod/AzureAdTapOnboarding

The ASP.NET Core application needs to onboard different types of Azure AD users. Some users cannot use passwordless authentication (yet), so a password setup is also required for those users. TAP only works with members, and we also need to support guest users with an alternative onboarding flow. The following user flows are supported or possible:

- AAD member user flow with TAP and FIDO2 authentication
- AAD member user flow with password using email/password authentication
- AAD member user flow with password setup and phone authentication
- AAD guest user flow with federated login
- AAD guest user flow with Microsoft account
- AAD guest user flow with email code

FIDO2 should be used for all enterprise employees with an office account in the enterprise. If this is not possible, then at least the IT administrators should be forced to use FIDO2 authentication, and companies should be planning a strategy for moving to phishing-resistant authentication. This could be enforced with PIM and a continuous access policy for administration roles. With FIDO2, the identities are protected with phishing-resistant authentication; this should be a requirement for any professional solution. Azure AD users with no computer can use an email code or SMS authentication. These are low-security authentication methods, and applications should not expose sensitive information to these user types.

Setup

The ASP.NET Core application uses the Microsoft.Identity.Web and Microsoft.Identity.Web.MicrosoftGraphBeta NuGet packages to implement the Azure AD clients. The ASP.NET Core client is a server-rendered application and uses an Azure App registration, which requires a secret or a certificate to acquire access tokens. The onboarding application uses Microsoft Graph application permissions to create the users and initialize the temporary access pass (TAP) flow. The following application permissions are used:

- User.EnableDisableAccount.All
- User.ReadWrite.All
- UserAuthenticationMethod.ReadWrite.All

The permissions are added to a separate Azure App registration and require a secret to use. In a second phase, I will look at implementing the Graph API access using Microsoft Graph delegated permissions. It is also possible to use a service managed identity to acquire a Graph access token with the required permissions.

Onboarding members using passwordless

Onboarding a new Azure AD user with passwordless and TAP needs to be implemented in two steps. First, a new Microsoft Graph user is created with the type member. This takes an unknown length of time to complete on Azure AD. When this is finished, a new TAP authentication method is created. I used the Polly NuGet package to retry this until the TAP request succeeds. Once successful, the temporary access pass is displayed in the UI. If this were a new employee, for example, you could print the pass out and let the user complete the process.
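The sample wraps Graph access in an _aadGraphSdkManagedIdentityAppClient service. As a point of reference, a Graph client authorized with the application permissions above could be constructed roughly like this; a minimal sketch using Azure.Identity, and an assumption on my part, since the sample hides the construction behind its _graphService helper:

using Azure.Identity;
using Microsoft.Graph;

// Hypothetical values - replace with the App registration used for Graph access.
var credential = new ClientSecretCredential(
    "<tenant-id>", "<client-id>", "<client-secret>");

// ".default" requests the application permissions granted to the App registration.
var graphServiceClient = new GraphServiceClient(
    credential, new[] { "https://graph.microsoft.com/.default" });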
The CreateMember method implements the two-step flow described above:

private async Task CreateMember(UserModel userData)
{
    var createdUser = await _aadGraphSdkManagedIdentityAppClient
        .CreateGraphMemberUserAsync(userData);

    if (createdUser!.Id != null)
    {
        if (userData.UsePasswordless)
        {
            var maxRetryAttempts = 7;
            var pauseBetweenFailures = TimeSpan.FromSeconds(3);
            var retryPolicy = Policy
                .Handle<HttpRequestException>()
                .WaitAndRetryAsync(maxRetryAttempts, i => pauseBetweenFailures);

            await retryPolicy.ExecuteAsync(async () =>
            {
                var tap = await _aadGraphSdkManagedIdentityAppClient
                    .AddTapForUserAsync(createdUser.Id);

                AccessInfo = new CreatedAccessModel
                {
                    Email = createdUser.Email,
                    TemporaryAccessPass = tap!.TemporaryAccessPass
                };
            });
        }
        else
        {
            AccessInfo = new CreatedAccessModel
            {
                Email = createdUser.Email,
                Password = createdUser.Password
            };
        }
    }
}

The CreateGraphMemberUserAsync method creates a new Microsoft Graph user. To use a temporary access pass, a member user must be used; guest users cannot be onboarded like this. Even though we do not use a password in this process, the Microsoft Graph user validation forces us to create one. We just create a random password and do not return it. This password will not be updated.

public async Task<CreatedUserModel> CreateGraphMemberUserAsync(UserModel userModel)
{
    if (!userModel.Email.ToLower().EndsWith(_aadIssuerDomain.ToLower()))
    {
        throw new ArgumentException("A guest user must be invited!");
    }

    var graphServiceClient = _graphService
        .GetGraphClientWithManagedIdentityOrDevClient();

    var password = GetRandomString();
    var user = new User
    {
        DisplayName = userModel.UserName,
        Surname = userModel.LastName,
        GivenName = userModel.FirstName,
        OtherMails = new List<string> { userModel.Email },
        UserType = "member",
        AccountEnabled = true,
        UserPrincipalName = userModel.Email,
        MailNickname = userModel.UserName,
        PasswordProfile = new PasswordProfile
        {
            Password = password,
            // We use TAP if a passwordless onboarding is used
            ForceChangePasswordNextSignIn = !userModel.UsePasswordless
        },
        PasswordPolicies = "DisablePasswordExpiration"
    };

    var createdUser = await graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return new CreatedUserModel
    {
        Email = createdUser.UserPrincipalName,
        Id = createdUser.Id,
        Password = password
    };
}

The TemporaryAccessPassAuthenticationMethod object is created using Microsoft Graph. We create a use-once TAP. The access code is returned and displayed in the UI.

public async Task<TemporaryAccessPassAuthenticationMethod?> AddTapForUserAsync(string userId)
{
    var graphServiceClient = _graphService
        .GetGraphClientWithManagedIdentityOrDevClient();

    var tempAccessPassAuthMethod = new TemporaryAccessPassAuthenticationMethod
    {
        //StartDateTime = DateTimeOffset.Now,
        LifetimeInMinutes = 60,
        IsUsableOnce = true,
    };

    var result = await graphServiceClient.Users[userId]
        .Authentication
        .TemporaryAccessPassMethods
        .Request()
        .AddAsync(tempAccessPassAuthMethod);

    return result;
}

The https://aka.ms/mysecurityinfo link can be used to complete the flow. The new user can open this link and enter the email address and the access code. Now that the user is authenticated, he or she can add a passwordless authentication method. I use an external FIDO2 key. Once set up, the user can register and authenticate. You should use at least two security keys. This is an awesome way of onboarding users: it allows users to authenticate in a phishing-resistant way without requiring or using a password. FIDO2 is the recommended and best way of authenticating users, and with the rollout of passkeys this will become more user friendly as well.
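One small piece not shown above is the GetRandomString helper. The article does not include it; a sketch of what it could look like, mirroring the RandomNumberGenerator-based helper the author uses in a related sample (an assumption, wrapped here in a hypothetical PasswordGenerator class):

using System.Security.Cryptography;

internal static class PasswordGenerator
{
    // Four cryptographically random integers plus a suffix that satisfies
    // complexity rules - enough for a throwaway initial password that is
    // never returned to the user.
    public static string GetRandomString()
    {
        return $"{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}-AC";
    }

    private static int GenerateRandom()
    {
        return RandomNumberGenerator.GetInt32(100000000, int.MaxValue);
    }
}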
Onboarding members using password

Because some companies still use legacy authentication, or because we would like to support users with no computer, we also need to onboard users with passwords. When using passwords, the user needs to update the password on first use. The user should add an MFA method, if this is not enforced by the tenant. Some employees might not have a computer and would like to use a phone to authenticate. An SMS code would be a good way of achieving this. This is of course not very secure, so you should expect these accounts to get lost or breached, and sensitive data should be avoided in applications used by these accounts. The device code flow could be used on a shared PC together with the user's mobile phone. Starting an authentication flow from a QR code is insecure, as it is not safe against phishing, but since SMS is used for these types of users the flow is already not very secure. Again, sensitive data must be avoided in applications accepting these low-security accounts. It's all about balance; maybe someday soon all users will have FIDO2 security keys or passkeys, and we can avoid these sorts of solutions.

Onboarding guest users (invitations)

Guest users cannot be onboarded by creating a Microsoft Graph user. You need to send an invitation to the guest user for your tenant. Microsoft Graph provides an API for this. There are different types of guest users, depending on the account type and the authentication method type. The invitation returns an invite redeem URL which can be used to set up the account. This URL is mailed to the email address used in the invite and does not need to be displayed in the UI.

private async Task InviteGuest(UserModel userData)
{
    var invitedGuestUser = await _aadGraphSdkManagedIdentityAppClient
        .InviteGuestUser(userData, _inviteUrl);

    if (invitedGuestUser!.Id != null)
    {
        AccessInfo = new CreatedAccessModel
        {
            Email = invitedGuestUser.InvitedUserEmailAddress,
            InviteRedeemUrl = invitedGuestUser.InviteRedeemUrl
        };
    }
}

The InviteGuestUser method is used to create the invite object, which is sent as an HTTP POST request to the Microsoft Graph API.

public async Task<Invitation?> InviteGuestUser(UserModel userModel, string redirectUrl)
{
    if (userModel.Email.ToLower().EndsWith(_aadIssuerDomain.ToLower()))
    {
        throw new ArgumentException("user must be from a different domain!");
    }

    var graphServiceClient = _graphService
        .GetGraphClientWithManagedIdentityOrDevClient();

    var invitation = new Invitation
    {
        InvitedUserEmailAddress = userModel.Email,
        SendInvitationMessage = true,
        InvitedUserDisplayName = $"{userModel.FirstName} {userModel.LastName}",
        InviteRedirectUrl = redirectUrl,
        InvitedUserType = "guest"
    };

    var invite = await graphServiceClient.Invitations
        .Request()
        .AddAsync(invitation);

    return invite;
}

Notes

Onboarding users with Microsoft Graph can be complicated, because you need to know which parameters to use and how each kind of user needs to be created. Azure AD members can be created using the Microsoft Graph user APIs; guest users are created using the Microsoft Graph invitation APIs. Onboarding users with TAP and FIDO2 is a great way of implementing this workflow. As of today, this is still part of the beta release.
Links

https://entra.microsoft.com/
https://learn.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-temporary-access-pass
https://learn.microsoft.com/en-us/graph/api/authentication-post-temporaryaccesspassmethods?view=graph-rest-1.0&tabs=csharp
https://learn.microsoft.com/en-us/graph/authenticationmethods-get-started
https://learn.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises
Create Azure B2C users with Microsoft Graph and ASP.NET Core
Onboarding new users in an ASP.NET Core application using Azure B2C
Disable Azure AD user account using Microsoft Graph and an application client
Invite external users to Azure AD using Microsoft Graph and ASP.NET Core
https://learn.microsoft.com/en-us/azure/active-directory/external-identities/external-identities-overview
https://learn.microsoft.com/en-us/azure/active-directory/external-identities/b2b-quickstart-add-guest-users-portal


Performance Improvements in ASP.NET Core 8

ASP.NET Core 8 and .NET 8 bring many exciting performance improvements. In this blog post, we will highlight some of the enhancements made in ASP.NET Core and show you how they can boost your web app's speed and efficiency. This is a continuation of last year's post on Performance improvements in ASP.NET Core 7. And, of course, it continues to be inspired by Performance Improvements in .NET 8. Many of those improvements either indirectly or directly improve the performance of ASP.NET Core as well.

Benchmarking Setup

We will use BenchmarkDotNet for many of the examples in this blog post. To set up a benchmarking project:

1. Create a new console app (dotnet new console)
2. Add a NuGet reference to BenchmarkDotNet (dotnet add package BenchmarkDotNet), version 0.13.8+
3. Change Program.cs to var summary = BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run();
4. Add the benchmarking code snippet below that you want to run
5. Run dotnet run -c Release and enter the number of the benchmark you want to run when prompted

Some of the benchmarks test internal types, and a self-contained benchmark cannot be written. In those cases we'll either reference numbers gotten by running the benchmarks in the repository (and link to the code in the repository), or we'll provide a simplified example to showcase what the improvement is doing. There are also some cases where we will reference our end-to-end benchmarks, which are public at https://aka.ms/aspnet/benchmarks, although we only display the last few months of data so that the page will load in a reasonable amount of time.

Servers

We have 3 server implementations in ASP.NET Core: Kestrel, Http.Sys, and IIS. The latter two are only usable on Windows and share a lot of code. Server performance is extremely important because the server is what processes incoming requests and forwards them to your application code. The faster we can process a request, the faster you can start running application code.

Kestrel

Header parsing is one of the first parts of processing done by a server for every request, which means its performance is critical to allow requests to reach your application code as fast as possible. In Kestrel we read bytes off the connection into a System.IO.Pipelines.Pipe, which is essentially a list of byte[]s. When parsing headers we are reading from that list of byte[]s and have two different code paths: one for when the full header is inside a single byte[], and another for when a header is split across multiple byte[]s. dotnet/aspnetcore#45044 updated the second (slower) code path to avoid allocating a byte[] when parsing the header, and optimized our SequenceReader usage to mostly use the underlying ReadOnlySequence<byte>, which can be faster in some cases. This resulted in a ~18% performance improvement for multi-span headers, as well as making that path allocation free, which helps reduce GC pressure. The following microbenchmark uses internal types in Kestrel and isn't easy to isolate as a minimal sample. For those interested, it is located with the Kestrel source code and was run before and after the change.

Method                           Mean      Op/s         Gen 0  Allocated
MultispanUnicodeHeader - Before  573.8 ns  1,742,893.2  -      48 B
MultispanUnicodeHeader - After   484.9 ns  2,062,450.8  -      -

An allocation profile of an end-to-end benchmark we run on our CI shows the difference with this change: we reduced the byte[] allocations of the scenario by 73%, from 7.8GB to 2GB (over the lifetime of the benchmark run).
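To make the single-span vs. multi-span distinction concrete, here is a small self-contained sketch (my illustration, not Kestrel's actual code) that builds a header split across two buffer segments and parses it through a ReadOnlySequence<byte>, the shape of data the optimized path works on:

using System;
using System.Buffers;
using System.Text;

class BufferSegment : ReadOnlySequenceSegment<byte>
{
    public BufferSegment(ReadOnlyMemory<byte> memory) => Memory = memory;

    public BufferSegment Append(ReadOnlyMemory<byte> memory)
    {
        var next = new BufferSegment(memory) { RunningIndex = RunningIndex + Memory.Length };
        Next = next;
        return next;
    }
}

class Program
{
    static void Main()
    {
        // Simulate a header that arrived in two reads, so it spans two segments.
        var first = new BufferSegment(Encoding.ASCII.GetBytes("X-Custom: some-"));
        var last = first.Append(Encoding.ASCII.GetBytes("value\r\n"));
        var sequence = new ReadOnlySequence<byte>(first, 0, last, last.Memory.Length);

        // The multi-span path reads across segments; the old code copied the
        // header into a temporary byte[] first, the optimized code does not.
        var reader = new SequenceReader<byte>(sequence);
        if (reader.TryReadTo(out ReadOnlySequence<byte> headerLine, (byte)'\n'))
        {
            Console.WriteLine(Encoding.ASCII.GetString(headerLine.ToArray()).TrimEnd('\r'));
        }
    }
}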
dotnet/aspnetcore#48368 replaced some internal custom vectorized code for ASCII comparison checks with the new Ascii class in .NET 8. This allowed us to remove ~400 lines of code and take advantage of improvements like AVX512 and ARM AdvSIMD that are implemented in the Ascii code and that we didn't have in Kestrel's implementation.

Http.Sys

Near the end of 7.0 we removed some extra thread pool dispatching in Kestrel, which improved performance significantly; more details are in last year's performance post. At the beginning of 8.0 we made similar changes to the Http.Sys server in dotnet/aspnetcore#44409. This improved our JSON end-to-end benchmark by 11%, from ~469k to ~522k RPS.

Another change affects large responses, especially over higher-latency connections. dotnet/aspnetcore#47776 adds an on-by-default option to enable kernel-mode response buffering. This allows application writes to be buffered in the OS layer regardless of whether the client connection has acked previous writes or not; the OS can then optimize sending the data by parallelizing writes and/or sending larger chunks of data at a time. The benefits are clear when using connections with higher latency. To show a specific example, we hosted a server in Sweden and a client in West Coast USA to create some latency in the connection. The following server code was used:

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.UseHttpSys(options =>
{
    options.UrlPrefixes.Add("http://+:12345");
    options.Authentication.Schemes = AuthenticationSchemes.None;
    options.Authentication.AllowAnonymous = true;
    options.EnableKernelResponseBuffering = true; // <-- new setting in 8.0
});

var app = builder.Build();

app.UseRouting();

app.MapGet("/file", () =>
{
    return TypedResults.File(File.Open("pathToLargeFile", FileMode.Open, FileAccess.Read));
});

app.Run();

The latency was around 200ms (round-trip) between client and server, and the server was responding to client requests with a 212MB file. With HttpSysOptions.EnableKernelResponseBuffering set to false, the file download took ~11 minutes; with it set to true, the download took ~30 seconds. That's a massive improvement, ~22x faster in this specific scenario! More details on how response buffering works can be found in this blog post.

dotnet/aspnetcore#44561 refactors the internals of response writing in Http.Sys to remove a bunch of GCHandle allocations, and conveniently removes a List<GCHandle> that was used to track handles for freeing. It does this by allocating and writing directly to NativeMemory when writing headers. By not pinning managed memory we are reducing GC pressure and helping reduce heap fragmentation. A downside is that we need to be extra careful to free the memory, because the allocations are no longer tracked by the GC. Running a simple web app and tracking GCHandle usage shows that in 7.0 a small response with 4 headers used 8 GCHandles per request, plus 2 more GCHandles per additional header. In 8.0 the same app used only 4 GCHandles per request, regardless of the number of headers.

dotnet/aspnetcore#45156 by @ladeak improved the implementation of HttpContext.Request.Headers.Keys and HttpContext.Request.Headers.Count in Http.Sys, which is also the same implementation used by IIS, so double win. Before, those properties had generic implementations that used IEnumerable and LINQ expressions. Now they count manually and minimize allocations, making accessing Count completely allocation free.
This benchmark uses internal types, so I'll link to the microbenchmark source instead of providing a standalone microbenchmark.

Before:

Method             Mean        Op/s         Gen 0   Allocated
CountSingleHeader  381.3 ns    2,622,896.1  0.0010  176 B
CountLargeHeaders  3,293.4 ns  303,639.9    0.0534  9,032 B
KeysSingleHeader   483.5 ns    2,068,299.5  0.0019  344 B
KeysLargeHeaders   3,559.4 ns  280,947.4    0.0572  9,648 B

After:

Method             Mean        Op/s         Gen 0   Allocated
CountSingleHeader  249.1 ns    4,014,316.0  -       -
CountLargeHeaders  278.3 ns    3,593,059.3  -       -
KeysSingleHeader   506.6 ns    1,974,125.9  -       32 B
KeysLargeHeaders   1,314.6 ns  760,689.5    0.0172  2,776 B

Native AOT

Native AOT was first introduced in .NET 7 and only worked with console applications and a limited number of libraries. In .NET 8.0 we've increased the number of libraries supported with Native AOT, as well as added support for ASP.NET Core applications. AOT apps can have a minimized disk footprint, reduced startup times, and reduced memory demand. But before we talk about AOT more and show some numbers, we should talk about a prerequisite: trimming.

Starting in .NET 6, trimming applications became a fully supported feature. Enabling this feature with <PublishTrimmed>true</PublishTrimmed> in your .csproj enables the trimmer to run during publish and remove code your application isn't using. This can result in smaller deployed application sizes, useful in scenarios where you are running on memory-constrained devices. Trimming isn't free though: libraries might need to annotate types and method calls to tell the trimmer about code being used that the trimmer can't determine on its own; otherwise the trimmer might trim away code you're relying on, and your app won't run as expected. The trimmer raises warnings when it sees code that might not be compatible with trimming.

Until .NET 8 the <TrimMode> property for publishing web apps was set to partial, meaning only assemblies that explicitly stated they supported trimming would be trimmed. Now in 8.0, full is used for <TrimMode>, which means all assemblies used by the app will be trimmed. These settings are documented in the trimming options docs. In .NET 6 and .NET 7 a lot of libraries weren't compatible with trimming yet, notably the ASP.NET Core libraries. If you tried to publish a simple ASP.NET Core app in 7.0 you would get a bunch of trimmer warnings, because most of ASP.NET Core didn't support trimming yet.

The following is an ASP.NET Core app to show trimming in net7.0 vs. net8.0. All the numbers are for a Windows publish.

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFrameworks>net7.0;net8.0</TargetFrameworks>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
</Project>

// dotnet publish --self-contained --runtime win-x64 --framework net7.0 -p:PublishTrimmed=true -p:PublishSingleFile=true --configuration Release
var app = WebApplication.Create();
app.Run((c) => c.Response.WriteAsync("hello world"));
app.Run();

TFM     Trimmed  Warnings  App Size  Publish duration
net7.0  false    0         88.4MB    3.9 sec
net8.0  false    0         90.9MB    3.9 sec
net7.0  true     16        28.9MB    16.4 sec
net8.0  true     0         17.3MB    10.8 sec

In addition to no more warnings when publishing trimmed on net8.0, the app size is smaller because we've annotated more libraries, so the linker can find more code that isn't being used by the app. Part of annotating the libraries involved analyzing what code is being kept by the trimmer and changing code to improve what can be trimmed.
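As an illustration (my example, not code from the post), the annotations libraries add come in two flavors: warn callers that reflection-based code is unsafe to trim, or tell the trimmer exactly what to preserve:

using System;
using System.Diagnostics.CodeAnalysis;

public static class BindingHelper
{
    // Callers of this method get a trim warning instead of a silent break.
    [RequiresUnreferencedCode("Uses reflection over TModel; trimming may remove its properties.")]
    public static void Bind<TModel>(TModel model)
    {
        foreach (var property in typeof(TModel).GetProperties())
        {
            Console.WriteLine(property.Name);
        }
    }

    // Alternatively, tell the trimmer to keep TModel's public properties so
    // the reflection below keeps working in a trimmed app, with no warning.
    public static void BindSafe<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicProperties)] TModel>(TModel model)
    {
        foreach (var property in typeof(TModel).GetProperties())
        {
            Console.WriteLine(property.Name);
        }
    }
}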
You can see numerous PRs to help this effort: dotnet/aspnetcore#47567, dotnet/aspnetcore#47454, dotnet/aspnetcore#46082, dotnet/aspnetcore#46015, dotnet/aspnetcore#45906, dotnet/aspnetcore#46020, and many more. The Publish duration field was calculated using Measure-Command in PowerShell (and deleting /bin/ and /obj/ between every run). As you can see, enabling trimming can increase the publish time, because the trimmer has to analyze the whole program to see what it can remove, which isn't a free operation.

We also introduced two smaller versions of WebApplication if you want even smaller apps, via CreateSlimBuilder and CreateEmptyBuilder. Changing the previous app to use CreateSlimBuilder:

// dotnet publish --self-contained --runtime win-x64 --framework net8.0 -p:PublishTrimmed=true -p:PublishSingleFile=true --configuration Release
var builder = WebApplication.CreateSlimBuilder(args);
var app = builder.Build();
app.Run((c) => c.Response.WriteAsync("hello world"));
app.Run();

will result in an app size of 15.5MB. And then going one step further with CreateEmptyBuilder:

// dotnet publish --self-contained --runtime win-x64 --framework net8.0 -p:PublishTrimmed=true -p:PublishSingleFile=true --configuration Release
var builder = WebApplication.CreateEmptyBuilder(new WebApplicationOptions() { Args = args });
var app = builder.Build();
app.Run((c) => c.Response.WriteAsync("hello world"));
app.Run();

will result in an app size of 13.7MB, although in this case the app won't work because there is no server implementation registered. If we add Kestrel via builder.WebHost.UseKestrelCore(); the app size becomes 15MB.

TFM     Builder       App Size
net8.0  Create        17.3MB
net8.0  Slim          15.5MB
net8.0  Empty         13.7MB
net8.0  Empty+Server  15.0MB

Note that both of these APIs are available starting in 8.0 and remove a lot of defaults, so they're more pay-for-play.

Now that we've taken a small look at trimming and seen that 8.0 has more trim-compatible libraries, let's take a look at Native AOT. Just like with trimming, if your app/library isn't compatible with Native AOT you'll get warnings when building for Native AOT, and there are additional limitations on what works in Native AOT. Using the same app as before, we'll enable Native AOT by adding <PublishAot>true</PublishAot> to our csproj.

TFM     AOT    App Size  Publish duration
net7.0  false  88.4MB    3.9 sec
net8.0  false  90.9MB    3.9 sec
net7.0  true   40MB      71.7 sec
net8.0  true   12.6MB    22.7 sec

And just like with trimming, we can test the WebApplication APIs that have fewer defaults enabled.

TFM     Builder       App Size
net8.0  Create        12.6MB
net8.0  Slim          8.8MB
net8.0  Empty         5.7MB
net8.0  Empty+Server  7.8MB

That's pretty cool! A small net8.0 app is 90.9MB, and when published as Native AOT it's 12.6MB, or as low as 7.8MB (assuming we want a server, which we probably do). Now let's take a look at some other performance characteristics of a Native AOT app: startup speed, memory usage, and RPS. In order to properly show E2E benchmark numbers we need to use a multi-machine setup, so that the server and client processes don't steal CPU from each other and we don't have random processes running like you would on a local machine. I'll be using our internal benchmarking infrastructure, which makes use of the benchmarking tool crank and our aspnet-citrine-win and aspnet-citrine-lin machines for server and load respectively. Both machine specs are described in our benchmarks readme. And finally, I'll be using an application that uses Minimal APIs to return a JSON payload.
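The benchmark app itself isn't listed in the post. A minimal sketch of such an app (my reconstruction, with assumed names like Todo and AppJsonContext) that returns JSON and stays trim/AOT friendly via the System.Text.Json source generator could look like this, using the Microsoft.NET.Sdk.Web SDK as in the csproj above:

using System.Text.Json.Serialization;

var builder = WebApplication.CreateSlimBuilder(args);

// Source-generated serialization avoids reflection, keeping JSON trim/AOT safe.
builder.Services.ConfigureHttpJsonOptions(options =>
{
    options.SerializerOptions.TypeInfoResolverChain.Insert(0, AppJsonContext.Default);
});

var app = builder.Build();

app.MapGet("/json", () => new Todo(1, "Walk the dog", false));

app.Run();

public record Todo(int Id, string Title, bool IsComplete);

[JsonSerializable(typeof(Todo))]
internal partial class AppJsonContext : JsonSerializerContext
{
}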
This app uses the Slim builder we showed earlier, and also sets <InvariantGlobalization>true</InvariantGlobalization> in the csproj. If we run the app without any extra settings:

crank --config https://raw.githubusercontent.com/aspnet/Benchmarks/main/scenarios/goldilocks.benchmarks.yml --config https://raw.githubusercontent.com/aspnet/Benchmarks/main/build/ci.profile.yml --config https://raw.githubusercontent.com/aspnet/Benchmarks/main/scenarios/steadystate.profile.yml --scenario basicminimalapivanilla --profile intel-win-app --profile intel-lin-load --application.framework net8.0 --application.options.collectCounters true

This gives us a ~293ms startup time, 444MB working set, and ~762k RPS. If we run the same app but publish it as Native AOT:

crank --config https://raw.githubusercontent.com/aspnet/Benchmarks/main/scenarios/goldilocks.benchmarks.yml --config https://raw.githubusercontent.com/aspnet/Benchmarks/main/build/ci.profile.yml --config https://raw.githubusercontent.com/aspnet/Benchmarks/main/scenarios/steadystate.profile.yml --scenario basicminimalapipublishaot --profile intel-win-app --profile intel-lin-load --application.framework net8.0 --application.options.collectCounters true

We get ~67ms startup time, 56MB working set, and ~681k RPS. That's ~77% faster startup, ~87% lower working set, and ~12% lower RPS. The startup speed is expected, because the app has already been optimized and there is no JIT running to start optimizing code. Also, in non-Native AOT apps, because startup methods are likely only called once, tiered compilation will never run on them, so they won't be as optimized as they could be; in Native AOT the startup methods are fully optimized. The working set is a bit surprising: it is lower because Native AOT apps by default run with the new Dynamic Adaptation To Application Sizes (DATAS) GC. This GC setting tries to maintain a balance between throughput and overall memory usage, which we can see it doing with an ~87% lower working set at the cost of some RPS. You can read more about the new GC setting in Maoni0's blog.

Let's also compare the Native AOT vs. non-Native AOT apps with the Server GC, by adding --application.environmentVariables DOTNET_GCDynamicAdaptationMode=0 when running the Native AOT app. This time we get ~64ms startup time, 403MB working set, and ~730k RPS. The startup time is still extremely fast, because changing the GC doesn't affect that. Our working set is closer to the non-Native AOT app, but still smaller due in part to not having the JIT compiler loaded and running, and our RPS is closer to the non-Native AOT app because we're using the Server GC, which optimizes throughput over memory usage.

AOT    GC      Startup  Working Set  RPS
false  Server  293ms    444MB        762k
false  DATAS   303ms    77MB         739k
true   Server  64ms     403MB        730k
true   DATAS   67ms     56MB         681k

Non-Native AOT apps have the JIT optimizing code while it's running, and starting in .NET 8 the JIT by default makes use of dynamic PGO. This is a really cool feature that Native AOT isn't able to benefit from, and it is one reason non-Native AOT apps can have more throughput than Native AOT apps. You can read more about dynamic PGO in the .NET 8 performance blog. If you're willing to trade some publish size for potentially more optimized code, you can pass /p:OptimizationPreference=Speed when building and publishing your Native AOT app. When we do this for our benchmark app (with Server GC) we get a publish size of 9.5MB instead of 8.9MB, and 745k RPS instead of 730k.
The app we've been using makes use of Minimal APIs, which by default isn't trim friendly. Minimal APIs does a lot of reflection and dynamic code generation that isn't statically analyzable, so the trimmer isn't able to safely trim the app. So why don't we see warnings when we publish this app as Native AOT? Because we wrote a source generator called the Request Delegate Generator (RDG) that replaces your MapGet, MapPost, etc. methods with trim-friendly code. This source generator is automatically used for ASP.NET Core apps when trimming/AOT publishing. Which leads us into the next section, where we dive into RDG.

Request Delegate Generator

The Request Delegate Generator (RDG) is a source generator created to make Minimal APIs trimmer and Native AOT friendly. Without RDG, using Minimal APIs will result in many warnings, and your app likely won't work as expected. Here is a quick example of an endpoint that results in an exception when using Native AOT without RDG, but works with RDG enabled (or when not using Native AOT):

app.MapGet("/test", (Bindable b) => "Hello world!");

public class Bindable
{
    public static ValueTask<Bindable?> BindAsync(HttpContext context, ParameterInfo parameter)
    {
        return new ValueTask<Bindable?>(new Bindable());
    }
}

This app throws when you send a GET request to /test, because the Bindable.BindAsync method is referenced via reflection, so the trimmer can't statically figure out that the method is being used and removes it. Minimal APIs then sees the MapGet call as needing a request body, which isn't allowed by default for GET calls.

Besides fixing warnings and making the app work as expected in Native AOT, we get improved first-response time and reduced publish size. Without RDG, the first time a request is made to the app is when all the expression trees are generated for all endpoints in the application. Because RDG generates the source for an endpoint at compile time, there is no expression tree generation needed: the code for a specific endpoint is already available and can execute immediately. If we take the app used earlier for benchmarking AOT and look at time to first request, we get ~187ms when not running as AOT and without RDG, and ~130ms when we enable RDG. When publishing as AOT, the time to first request is ~60ms regardless of whether RDG is used. But this app only has 2 endpoints, so let's add 1000 more endpoints and see the difference!

2 Routes

AOT    RDG    First Request  Publish Size
false  false  187ms          97MB
false  true   130ms          97MB
true   false  60ms           11.15MB
true   true   60ms           8.89MB

1002 Routes

AOT    RDG    First Request  Publish Size
false  false  1082ms         97MB
false  true   176ms          97MB
true   false  157ms          11.15MB
true   true   84ms           8.89MB

Runtime APIs

In this section we'll look at changes that mainly involve updating to new APIs introduced in the .NET 8 Base Class Library (BCL).

SearchValues

dotnet/aspnetcore#45300 by @gfoidl, dotnet/aspnetcore#47459, dotnet/aspnetcore#49114, and dotnet/aspnetcore#49117 all make use of the new SearchValues type, which lets these code paths take advantage of optimized search implementations for the specific values being searched for. The SearchValues section of the .NET 8 performance blog explains more details about the different search algorithms used and why this type is so cool!

Spans

dotnet/aspnetcore#46098 makes use of the new MemoryExtensions.Split(ReadOnlySpan<char> source, Span<Range> destination, char separator) method. This allows certain cases of string.Split(...) to be replaced with a non-allocating version, as the sketch below illustrates.
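Here is a hedged sketch of the pattern (my illustration, not the PR's code):

using System;

ReadOnlySpan<char> header = "gzip, deflate, br";

// Before: allocates a string[] plus one string per item.
string[] allocated = header.ToString().Split(',');

// After: item boundaries land in a stack-allocated Span<Range>; nothing is allocated.
Span<Range> ranges = stackalloc Range[8];
int count = header.Split(ranges, ',');
for (int i = 0; i < count; i++)
{
    ReadOnlySpan<char> item = header[ranges[i]].Trim();
    // use item without materializing a string
}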
This saves the string[] allocation as well as the individual string allocations for the items in the string[]. More details on this new API can be found in the span section of the .NET 8 performance post.

FrozenDictionary

Another new type introduced in .NET 8 is FrozenDictionary. This allows constructing a dictionary optimized for read operations, at the cost of slower construction. dotnet/aspnetcore#49714 switches a Dictionary in routing to use FrozenDictionary. This dictionary is used when routing an HTTP request to the appropriate endpoint, which happens for almost every request to an application. The following tables show the cost of creating the dictionary vs. the frozen dictionary, and then the per-operation cost of using a dictionary vs. a frozen dictionary. You can see that constructing a FrozenDictionary can be up to 13x slower, but the overall time is still in the microsecond range and the FrozenDictionary is only constructed once for the app. What we all like to see is that the per-operation performance of using FrozenDictionary is 2.5x-3.5x faster than a Dictionary!

[GroupBenchmarksBy(BenchmarkLogicalGroupRule.ByCategory)]
public class JumpTableMultipleEntryBenchmark
{
    private string[] _strings;
    private int[] _segments;
    private JumpTable _dictionary;
    private JumpTable _frozenDictionary;
    private List<(string text, int _)> _entries;

    [Params(1000)]
    public int NumRoutes;

    [GlobalSetup]
    public void Setup()
    {
        _strings = GetStrings(1000);
        _segments = new int[1000];

        for (var i = 0; i < _strings.Length; i++)
        {
            _segments[i] = _strings[i].Length;
        }

        var samples = new int[NumRoutes];
        for (var i = 0; i < samples.Length; i++)
        {
            samples[i] = i * (_strings.Length / NumRoutes);
        }

        _entries = new List<(string text, int _)>();
        for (var i = 0; i < samples.Length; i++)
        {
            _entries.Add((_strings[samples[i]], i));
        }

        _dictionary = new DictionaryJumpTable(0, -1, _entries.ToArray());
        _frozenDictionary = new FrozenDictionaryJumpTable(0, -1, _entries.ToArray());
    }

    [BenchmarkCategory("GetDestination"), Benchmark(Baseline = true, OperationsPerInvoke = 1000)]
    public int Dictionary()
    {
        var strings = _strings;
        var segments = _segments;

        var destination = 0;
        for (var i = 0; i < strings.Length; i++)
        {
            destination = _dictionary.GetDestination(strings[i], segments[i]);
        }

        return destination;
    }

    [BenchmarkCategory("GetDestination"), Benchmark(OperationsPerInvoke = 1000)]
    public int FrozenDictionary()
    {
        var strings = _strings;
        var segments = _segments;

        var destination = 0;
        for (var i = 0; i < strings.Length; i++)
        {
            destination = _frozenDictionary.GetDestination(strings[i], segments[i]);
        }

        return destination;
    }

    [BenchmarkCategory("Create"), Benchmark(Baseline = true)]
    public JumpTable CreateDictionaryJumpTable() => new DictionaryJumpTable(0, -1, _entries.ToArray());

    [BenchmarkCategory("Create"), Benchmark]
    public JumpTable CreateFrozenDictionaryJumpTable() => new FrozenDictionaryJumpTable(0, -1, _entries.ToArray());

    private static string[] GetStrings(int count)
    {
        var strings = new string[count];
        for (var i = 0; i < count; i++)
        {
            var guid = Guid.NewGuid().ToString();
            // Between 5 and 36 characters
            var text = guid.Substring(0, Math.Max(5, Math.Min(i, 36)));
            if (char.IsDigit(text[0]))
            {
                // Convert first character to a letter.
                text = ((char)(text[0] + ('G' - '0'))) + text.Substring(1);
            }

            if (i % 2 == 0)
            {
                // Lowercase half of them
                text = text.ToLowerInvariant();
            }

            strings[i] = text;
        }

        return strings;
    }
}

public abstract class JumpTable
{
    public abstract int GetDestination(string path, int segmentLength);
}

internal sealed class DictionaryJumpTable : JumpTable
{
    private readonly int _defaultDestination;
    private readonly int _exitDestination;
    private readonly Dictionary<string, int> _dictionary;

    public DictionaryJumpTable(
        int defaultDestination,
        int exitDestination,
        (string text, int destination)[] entries)
    {
        _defaultDestination = defaultDestination;
        _exitDestination = exitDestination;
        _dictionary = entries.ToDictionary(e => e.text, e => e.destination, StringComparer.OrdinalIgnoreCase);
    }

    public override int GetDestination(string path, int segmentLength)
    {
        if (segmentLength == 0)
        {
            return _exitDestination;
        }

        var text = path.Substring(0, segmentLength);
        if (_dictionary.TryGetValue(text, out var destination))
        {
            return destination;
        }

        return _defaultDestination;
    }
}

internal sealed class FrozenDictionaryJumpTable : JumpTable
{
    private readonly int _defaultDestination;
    private readonly int _exitDestination;
    private readonly FrozenDictionary<string, int> _dictionary;

    public FrozenDictionaryJumpTable(
        int defaultDestination,
        int exitDestination,
        (string text, int destination)[] entries)
    {
        _defaultDestination = defaultDestination;
        _exitDestination = exitDestination;
        _dictionary = entries.ToFrozenDictionary(e => e.text, e => e.destination, StringComparer.OrdinalIgnoreCase);
    }

    public override int GetDestination(string path, int segmentLength)
    {
        if (segmentLength == 0)
        {
            return _exitDestination;
        }

        var text = path.Substring(0, segmentLength);
        if (_dictionary.TryGetValue(text, out var destination))
        {
            return destination;
        }

        return _defaultDestination;
    }
}

Method                           NumRoutes  Mean            Error          StdDev         Ratio  RatioSD
CreateDictionaryJumpTable        25         735.797 ns      8.5503 ns      7.5797 ns      1.00   0.00
CreateFrozenDictionaryJumpTable  25         4,677.927 ns    80.4279 ns     71.2972 ns     6.36   0.11
CreateDictionaryJumpTable        50         1,433.309 ns    19.4435 ns     17.2362 ns     1.00   0.00
CreateFrozenDictionaryJumpTable  50         10,065.905 ns   188.7031 ns    176.5130 ns    7.03   0.12
CreateDictionaryJumpTable        100        2,712.224 ns    46.0878 ns     53.0747 ns     1.00   0.00
CreateFrozenDictionaryJumpTable  100        28,397.809 ns   358.2159 ns    335.0754 ns    10.46  0.20
CreateDictionaryJumpTable        1000       28,279.153 ns   424.3761 ns    354.3733 ns    1.00   0.00
CreateFrozenDictionaryJumpTable  1000       313,515.684 ns  6,148.5162 ns  8,208.0925 ns  11.26  0.33
Dictionary                       25         21.428 ns       0.1816 ns      0.1516 ns      1.00   0.00
FrozenDictionary                 25         7.137 ns        0.0588 ns      0.0521 ns      0.33   0.00
Dictionary                       50         21.630 ns       0.1978 ns      0.1851 ns      1.00   0.00
FrozenDictionary                 50         7.476 ns        0.0874 ns      0.0818 ns      0.35   0.00
Dictionary                       100        23.508 ns       0.3498 ns      0.3272 ns      1.00   0.00
FrozenDictionary                 100        7.123 ns        0.0840 ns      0.0745 ns      0.30   0.00
Dictionary                       1000       23.761 ns       0.2360 ns      0.2207 ns      1.00   0.00
FrozenDictionary                 1000       8.516 ns        0.1508 ns      0.1337 ns      0.36   0.01

Other

This section is a compilation of changes that enhance performance but do not fall under any of the preceding categories.

Regex

As part of the AOT effort, we noticed that the regex created in RegexRouteConstraint (see route constraints for more info) was adding ~1MB to the published app size. This is because the route constraints are dynamic (application code defines them) and we were using the Regex constructor that accepts RegexOptions.
This meant the trimmer had to keep around all regex code that could potentially be used, including the NonBacktracking engine, which accounts for ~0.8MB of code. By adding RegexOptions.Compiled, the trimmer can see that the NonBacktracking code will not be used and can reduce the application size by ~0.8MB. Additionally, a compiled regex is faster than an interpreted regex.

The quick fix was to add RegexOptions.Compiled when creating the Regex, which was done in dotnet/aspnetcore#46192 by @eugeneogongo. The problem is that this slows down app startup, because we resolve constraints when starting the app and compiled regexes are slower to construct. dotnet/aspnetcore#46323 fixes this by lazily initializing the regexes, so app startup is actually faster than in 7.0, when we weren't using compiled regexes. It also added caching to the route constraints, which means that if you share constraints across multiple routes you will save allocations.

Running a microbenchmark for the route builder to measure startup performance shows an almost 450% improvement when using 1000 routes, due to no longer initializing the regexes up front. The benchmark lives in the dotnet/aspnetcore repo; it has a lot of setup code and would be a bit too long to include in this post.

Before, with interpreted regexes:

Method  Mean      Op/s   Gen 0    Gen 1  Allocated
Build   6.739 ms  148.4  15.6250  -      7 MB

After, with compiled and lazy regexes:

Method  Mean      Op/s   Gen 0   Gen 1   Allocated
Build   1.521 ms  657.2  5.8594  1.9531  2 MB

Another Regex improvement came from dotnet/aspnetcore#44770, which switched a Regex usage in routing to the Regex source generator. This moves the cost of compiling the Regex to build time, and it also results in faster Regex code due to optimizations the source generator takes advantage of that the in-process Regex compiler does not. We'll show a simplified example that compares the generated regex with the compiled regex:

public partial class AlphaRegex
{
    static Regex Net7Constraint = new Regex(
        @"^[a-z]*$",
        RegexOptions.CultureInvariant | RegexOptions.Compiled | RegexOptions.IgnoreCase,
        TimeSpan.FromSeconds(10));

    static Regex Net8Constraint = GetAlphaRouteRegex();

    [GeneratedRegex(@"^[A-Za-z]*$")]
    private static partial Regex GetAlphaRouteRegex();

    [Benchmark(Baseline = true)]
    public bool CompiledRegex()
    {
        return Net7Constraint.IsMatch("Administration") && Net7Constraint.IsMatch("US");
    }

    [Benchmark]
    public bool SourceGenRegex()
    {
        return Net8Constraint.IsMatch("Administration") && Net8Constraint.IsMatch("US");
    }
}

Method          Mean      Error     StdDev    Ratio
CompiledRegex   86.92 ns  0.572 ns  0.447 ns  1.00
SourceGenRegex  57.81 ns  0.860 ns  0.805 ns  0.66

Analyzers

Analyzers are useful for pointing out issues in code that can be hard to convey in API signatures, and for suggesting code patterns that are more readable or more performant. dotnet/aspnetcore#44799 and dotnet/aspnetcore#44791, both from @martincostello, enabled CA1854, which helps avoid 2 dictionary lookups when only 1 is needed, and dotnet/aspnetcore#44269 enables a bunch of analyzers, many of which help use more performant APIs; they are described in more detail in last year's .NET 7 performance post. I encourage developers who are interested in the performance of their own products to check out the performance-focused analyzers, which lists many analyzers that will help avoid easy-to-fix performance issues.
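For example, CA1854 flags a ContainsKey check followed by an indexer access, which performs two hash lookups, and suggests TryGetValue, which performs one (an illustrative sketch, not code from the PRs):

using System;
using System.Collections.Generic;

var headers = new Dictionary<string, string> { ["Accept"] = "application/json" };

// Flagged by CA1854: two lookups (ContainsKey, then the indexer).
if (headers.ContainsKey("Accept"))
{
    Console.WriteLine(headers["Accept"]);
}

// Preferred: a single lookup.
if (headers.TryGetValue("Accept", out var accept))
{
    Console.WriteLine(accept);
}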
StringBuilder

StringBuilder is an extremely useful class for constructing a string when you either can't precompute the size of the string or want an easy way to construct a string without the complications involved in using string.Create(...). StringBuilder comes with a lot of helpful methods, as well as a custom implementation of an InterpolatedStringHandler. What this means is that you can "create" strings to add to the StringBuilder without actually allocating the intermediate string. For example, previously you might have written stringBuilder.Append(FormattableString.Invariant($"{key} = {value}"));. This would allocate a string via FormattableString.Invariant(...) and then copy it into the StringBuilder's internal char[] buffer, making the string a temporary allocation. Instead you can write stringBuilder.Append(CultureInfo.InvariantCulture, $"{key} = {value}");. This also looks like it would allocate a string via $"{key} = {value}", but because StringBuilder has a custom InterpolatedStringHandler, the string isn't actually allocated and is instead written directly to the internal char[].

dotnet/aspnetcore#44691 fixes some usage patterns with StringBuilder to avoid allocations and makes use of the InterpolatedStringHandler overload(s). One specific example takes a byte[] and converts it into a string in hexadecimal format so we can send it as a query string.

[MemoryDiagnoser]
public class AppendBenchmark
{
    private byte[] _b = new byte[30];

    [GlobalSetup]
    public void Setup()
    {
        RandomNumberGenerator.Fill(_b);
    }

    [Benchmark]
    public string AppendToString()
    {
        var sb = new StringBuilder();
        foreach (var b in _b)
        {
            sb.Append(b.ToString("x2", CultureInfo.InvariantCulture));
        }
        return sb.ToString();
    }

    [Benchmark]
    public string AppendInterpolated()
    {
        var sb = new StringBuilder();
        foreach (var b in _b)
        {
            sb.Append(CultureInfo.InvariantCulture, $"{b:x2}");
        }
        return sb.ToString();
    }
}

Method              Mean      Gen0    Allocated
AppendToString      748.7 ns  0.1841  1448 B
AppendInterpolated  739.7 ns  0.0620  488 B

Summary

Thanks for reading! Try out .NET 8 and let us know how your app's performance has changed! We are always looking for feedback on how to improve the product and look forward to your contributions, be it an issue report or a PR. If you want more performance goodness, you can read the Performance Improvements in .NET 8 post. Also, take a look at Developer Stories, which showcases multiple teams at Microsoft migrating from .NET Framework to .NET or to newer versions of .NET and seeing major performance and operating-cost wins.

The post Performance Improvements in ASP.NET Core 8 appeared first on .NET Blog.


Video: Show a user’s emails in an ASP.NET Core app using Microsoft Graph

I've been working a lot with .NET Core and Microsoft Graph lately and decided to put together a short video, based on a Microsoft Learn module, covering how the technologies can be used together. If you haven't used Microsoft Graph before, it provides a secure, unified API to access organizational data and intelligence (data stored in Microsoft 365, for example). So why would you ever want to access a signed-in user's emails and include them in your custom app? The simple answer is: "Bring organizational data where your users need it every day!" Instead of users switching from your app to find a relevant email, calendar event, Microsoft Teams chat (and more) by jumping between various productivity apps, you can pull that type of data directly into your custom app. This allows users to work more efficiently and make more informed decisions, all while minimizing context shifts.

In this video I'll introduce you to:

- The role of security in making Microsoft Graph calls.
- Microsoft Identity and Microsoft Graph middleware configuration.
- The role of permissions/scopes and access tokens.
- The Microsoft Graph .NET Core SDK and how it can be used.
- How to create reusable classes that handle making Microsoft Graph calls.
- Dependency injection and how it can be used to access a GraphServiceClient object.
- How to retrieve a batch of email messages using the UserMessagesCollectionRequest class.

Show a user's emails in an ASP.NET Core app using Microsoft Graph
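As a taste of what the video covers, a reusable class that receives a GraphServiceClient via dependency injection and retrieves a batch of messages might look roughly like this; a sketch based on the Microsoft Graph .NET SDK v4 API (to which the UserMessagesCollectionRequest class belongs), with names like EmailService being my own:

using System.Threading.Tasks;
using Microsoft.Graph;

public class EmailService
{
    private readonly GraphServiceClient _graphServiceClient;

    // GraphServiceClient is registered by the Microsoft Identity middleware
    // and injected here.
    public EmailService(GraphServiceClient graphServiceClient)
    {
        _graphServiceClient = graphServiceClient;
    }

    public async Task<IUserMessagesCollectionPage> GetRecentMessagesAsync(int top)
    {
        // Builds a request for the signed-in user's messages.
        return await _graphServiceClient.Me.Messages
            .Request()
            .Select(m => new { m.Subject, m.From, m.ReceivedDateTime })
            .OrderBy("receivedDateTime desc")
            .Top(top)
            .GetAsync();
    }
}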


Reset passwords in ASP.NET Core using delegated permissions and Microsoft Graph

This article shows how an administrator can reset passwords for local members of an Azure AD tenant using Microsoft Graph and delegated permissions. An ASP.NET Core application is used to implement the Azure AD client and the Graph client services.

Code: https://github.com/damienbod/azuerad-reset

Setup Azure App registration

The Azure App registration is set up to authenticate with a user and an application (delegated flow). An App registration "Web" setup is used. Only delegated permissions are used in this setup. This implements an OpenID Connect code flow with PKCE and a confidential client. A secret or a certificate is required for this flow. The following delegated Graph permissions are used:

- Directory.AccessAsUser.All
- User.ReadWrite.All
- UserAuthenticationMethod.ReadWrite.All

ASP.NET Core setup

The ASP.NET Core application implements the Azure AD client using the Microsoft.Identity.Web NuGet package and libraries. The following packages are used:

- Microsoft.Identity.Web
- Microsoft.Identity.Web.UI
- Microsoft.Identity.Web.GraphServiceClient (SDK 5) or Microsoft.Identity.Web.MicrosoftGraph (SDK 4)

Microsoft Graph is not added directly, because Microsoft.Identity.Web.MicrosoftGraph or Microsoft.Identity.Web.GraphServiceClient adds it with a tested and working version. Microsoft.Identity.Web uses the Microsoft.Identity.Web.GraphServiceClient package for Graph SDK 5; Microsoft.Identity.Web.MicrosoftGraph uses Microsoft.Graph 4.x versions. The official Microsoft Graph documentation is already updated to SDK 5. For application permissions, Microsoft Graph SDK 5 can be used with Azure.Identity.

Search for users with Graph SDK 5 and resetting the password

Graph SDK 5 can be used to search for users using a delegated scope and then to reset the password using an HTTP PATCH request. The Graph QueryParameters are used to find the user, and the PATCH request updates the password using a PasswordProfile.

using System.Security.Cryptography;
using Microsoft.Graph;
using Microsoft.Graph.Models;

namespace AzureAdPasswordReset;

public class UserResetPasswordDelegatedGraphSDK5
{
    private readonly GraphServiceClient _graphServiceClient;

    public UserResetPasswordDelegatedGraphSDK5(GraphServiceClient graphServiceClient)
    {
        _graphServiceClient = graphServiceClient;
    }

    /// <summary>
    /// Directory.AccessAsUser.All User.ReadWrite.All UserAuthenticationMethod.ReadWrite.All
    /// </summary>
    public async Task<(string? Upn, string? Password)> ResetPassword(string oid)
    {
        var password = GetRandomString();

        var user = await _graphServiceClient
            .Users[oid]
            .GetAsync();

        if (user == null)
        {
            throw new ArgumentNullException(nameof(oid));
        }

        await _graphServiceClient.Users[oid].PatchAsync(
            new User
            {
                PasswordProfile = new PasswordProfile
                {
                    Password = password,
                    ForceChangePasswordNextSignIn = true
                }
            });

        return (user.UserPrincipalName, password);
    }

    public async Task<UserCollectionResponse?> FindUsers(string search)
    {
        var result = await _graphServiceClient.Users.GetAsync((requestConfiguration) =>
        {
            requestConfiguration.QueryParameters.Top = 10;
            if (!string.IsNullOrEmpty(search))
            {
                requestConfiguration.QueryParameters.Search = $"\"displayName:{search}\"";
            }
            requestConfiguration.QueryParameters.Orderby = new string[] { "displayName" };
            requestConfiguration.QueryParameters.Count = true;
            requestConfiguration.QueryParameters.Select = new string[] { "id", "displayName", "userPrincipalName", "userType" };
            requestConfiguration.QueryParameters.Filter = "userType eq 'Member'"; // onPremisesSyncEnabled eq false
            requestConfiguration.Headers.Add("ConsistencyLevel", "eventual");
        });

        return result;
    }

    private static string GetRandomString()
    {
        var random = $"{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}-AC";
        return random;
    }

    private static int GenerateRandom()
    {
        return RandomNumberGenerator.GetInt32(100000000, int.MaxValue);
    }
}

Search for users with SDK 4

The application allows the user administration to search for members of the Azure AD tenant, finding users with a select and a filter definition. The search query parameter would probably give a better user experience.

public async Task<IGraphServiceUsersCollectionPage?> FindUsers(string search)
{
    var users = await _graphServiceClient.Users.Request()
        .Filter($"startswith(displayName,'{search}') AND userType eq 'Member'")
        .Select(u => new
        {
            u.Id,
            u.GivenName,
            u.Surname,
            u.DisplayName,
            u.Mail,
            u.EmployeeId,
            u.EmployeeType,
            u.BusinessPhones,
            u.MobilePhone,
            u.AccountEnabled,
            u.UserPrincipalName
        })
        .GetAsync();

    return users;
}

The ASP.NET Core Razor page supports auto-complete using the OnGetAutoCompleteSuggest method, which returns the results found using the Graph request.

private readonly UserResetPasswordDelegatedGraphSDK4 _graphUsers;

public string? SearchText { get; set; }

public IndexModel(UserResetPasswordDelegatedGraphSDK4 graphUsers)
{
    _graphUsers = graphUsers;
}

public async Task<ActionResult> OnGetAutoCompleteSuggest(string term)
{
    if (term == "*") term = string.Empty;

    var usersCollectionResponse = await _graphUsers.FindUsers(term);
    var users = usersCollectionResponse!.ToList();

    var usersDisplay = users.Select(user => new
    {
        user.Id,
        user.UserPrincipalName,
        user.DisplayName
    });

    SearchText = term;
    return new JsonResult(usersDisplay);
}

The Razor page can be implemented using Bootstrap or whatever CSS framework you prefer.

Reset the password for user X using Graph SDK 4

The Graph service supports resetting a password using a delegated permission. The user is requested using the OID, and a new PasswordProfile is created, updating the password and forcing a one-time usage.

/// <summary>
/// Directory.AccessAsUser.All
/// User.ReadWrite.All
/// UserAuthenticationMethod.ReadWrite.All
/// </summary>
public async Task<(string? Upn, string? Password)> ResetPassword(string oid)
{
    var password = GetRandomString();

    var user = await _graphServiceClient.Users[oid]
        .Request().GetAsync();

    if (user == null)
    {
        throw new ArgumentNullException(nameof(oid));
    }

    await _graphServiceClient.Users[oid].Request()
        .UpdateAsync(new User
        {
            PasswordProfile = new PasswordProfile
            {
                Password = password,
                ForceChangePasswordNextSignIn = true
            }
        });

    return (user.UserPrincipalName, password);
}

The Razor page sends a POST request and resets the password using the user principal name.

public async Task<IActionResult> OnPostAsync()
{
    var id = Request.Form
        .FirstOrDefault(u => u.Key == "userId")
        .Value.FirstOrDefault();

    var upn = Request.Form
        .FirstOrDefault(u => u.Key == "userPrincipalName")
        .Value.FirstOrDefault();

    if (!string.IsNullOrEmpty(id))
    {
        var result = await _graphUsers.ResetPassword(id);
        Upn = result.Upn;
        Password = result.Password;
        return Page();
    }

    return Page();
}

Running the application

When the application is started, a user password can be reset and updated. It is important to block this function for non-authorized users, as it makes it possible to reset any account without further protection. You could protect this application with PIM, using an Azure AD security group or something similar.

Notes

Using Graph SDK 4 is hard, as almost no docs for it exist now; Graph has moved to version 5. Microsoft Graph SDK 5 has many breaking changes and is supported by Microsoft.Identity.Web using the Microsoft.Identity.Web.GraphServiceClient package. Highly privileged permissions are used in this solution, so it is important to protect this API and the users that can use the application.

Links

https://aka.ms/mysecurityinfo
https://learn.microsoft.com/en-us/graph/api/overview?view=graph-rest-1.0
https://learn.microsoft.com/en-us/graph/sdks/paging?tabs=csharp
https://github.com/AzureAD/microsoft-identity-web/blob/jmprieur/Graph5/src/Microsoft.Identity.Web.GraphServiceClient/Readme.md
https://learn.microsoft.com/en-us/graph/api/authenticationmethod-resetpassword?view=graph-rest-1.0&tabs=csharp


MSBUILD : error MSB1011: Specify which project or ...
Category: Other

Question How do you resolve the error "MSBUILD error MSB1011 Specify which project or solutio ...


Views: 0 Likes: 8
How you parse a collection of email addresses with ...
Category: Technology

Question How do you parse an email with invalid characters in MailKit when using C#? An ...


Views: 556 Likes: 57
how to use speech in asp.net core
Category: .Net 7

Question How do you use Speech ...


Views: 268 Likes: 116
Issue and verify BBS+ verifiable credentials using ASP.NET Core and trinsic.id
Issue and verify BBS+ verifiable credentials using ...

This article shows how to implement identity verification in a solution using ASP.NET Core and trinsic.id, built using an id-tech solution based on self-sovereign identity principles. The credential issuer uses OpenID Connect to authenticate, implemented using Microsoft Entra ID. The edge or web wallet authenticates using trinsic.id based on a single-factor email code. The verifier needs no authentication; it only verifies that the verifiable credential is authentic and valid. The verifiable credentials use JSON-LD ZKP with BBS+ Signatures and selective disclosure for verification. Code: https://github.com/swiss-ssi-group/TrinsicV2AspNetCore

Use Case

As a university administrator, I want to create BBS+ verifiable credential templates for university diplomas. As a student, I want to authenticate using my university account (OpenID Connect code flow with PKCE), create my verifiable credential and download this credential to my SSI wallet. As an HR employee, I want to verify that a job candidate has a degree from this university. The HR employee needs the verification but does not need to see the data. The sample creates university diplomas as verifiable credentials with BBS+ signatures. The credentials are issued using OIDC with strong authentication (if possible) to a candidate mobile wallet. The issuer uses a trust registry so that the verifier can validate that the VC is authentic. The verifier uses BBS+ selective disclosure to validate a diploma. (ZKP is not possible at present.) The university application requires an admin zone and a student zone.

Setup

The university application plays the role of credential issuer and has its own users. The trinsic.id wallet is used to store the verifiable credentials for students. Company X HR plays the role of verifier and has no connection to the university. The company has a list of trusted universities (DIDs).

Issuing a credential

The ASP.NET Core issuer application can create a new issuer web (edge) wallet, create templates and issue credentials using the template. The trinsic.id .NET SDK is used to implement the SSI or id-tech integration. How and where trinsic.id stores the data is unknown, and you must trust them to do this correctly if you use their solution. Implementing your own ledger or identity layer is at present too much effort, so the best option is to use one of the integration solutions. The UI is implemented using ASP.NET Core Razor pages and the Microsoft.Identity.Web Nuget package with Microsoft Entra ID for the authentication. The required issuer template is used to create the credential offer.

var diploma = await _universityServices.GetUniversityDiploma(DiplomaTemplateId); var tid = Convert.ToInt32(DiplomaTemplateId, CultureInfo.InvariantCulture); var response = await _universityServices .IssuerStudentDiplomaCredentialOffer(diploma, tid); CredentialOfferUrl = response!.ShareUrl;

The IssuerStudentDiplomaCredentialOffer method uses the university ecosystem and issues a new offer using its template.
public async Task<CreateCredentialOfferResponse?> IssuerStudentDiplomaCredentialOffer (Diploma diploma, int universityDiplomaTemplateId) { var templateId = await GetUniversityDiplomaTemplateId(universityDiplomaTemplateId); // get the template from the id-tech solution var templateResponse = await GetUniversityDiplomaTemplate(templateId!); // Auth token from University issuer wallet _trinsicService.Options.AuthToken = _configuration["TrinsicOptionsIssuerAuthToken"]; var response = await _trinsicService .Credential.CreateCredentialOfferAsync( new CreateCredentialOfferRequest { TemplateId = templateResponse.Template.Id, ValuesJson = JsonSerializer.Serialize(diploma), GenerateShareUrl = true }); return response; }

The UI returns the offer and displays it as a QR code, which can be opened and starts the process of adding the credential to the web wallet. This is a magic link, not an SSI OpenID for Verifiable Credential Issuance flow or the starting point for a DIDComm V2 flow. Starting any flow using a QR code is unsafe and further protection is required. Using an authenticated application with phishing-resistant authentication, or using OpenID for Verifiable Credential Issuance for the VC issuing, reduces this risk.

Creating a proof in the wallet

Once the credential is in the holder's wallet, a proof can be created from it. The proof can be used in a verifier application. The wallet authentication of the user is implemented using a single-factor email code and creates a selective proof which can be copied to the verifier web application. OpenID for Verifiable Presentations for verifiers cannot be used because there is no connection between the issuer and the verifier. This could be possible if the process was delegated to the wallet using some type of magic URL, string etc. The wallet can authenticate against any known ecosystems.
public class GenerateProofService { private readonly TrinsicService _trinsicService; private readonly IConfiguration _configuration; public List<SelectListItem> Universities = new(); public GenerateProofService(TrinsicService trinsicService, IConfiguration configuration) { _trinsicService = trinsicService; _configuration = configuration; Universities = _configuration.GetSection("Universities")!.Get<List<SelectListItem>>()!; } public async Task<List<SelectListItem>> GetItemsInWallet(string userAuthToken) { var results = new List<SelectListItem>(); // Auth token from user _trinsicService.Options.AuthToken = userAuthToken; // get all items var items = await _trinsicService.Wallet.SearchWalletAsync(new SearchRequest()); foreach (var item in items.Items) { var jsonObject = JsonNode.Parse(item)!; var id = jsonObject["id"]; var vcArray = jsonObject["data"]!["type"]; var vc = string.Empty; foreach (var i in vcArray!.AsArray()) { var val = i!.ToString(); if (val != "VerifiableCredential") { vc = val!.ToString(); break; } } results.Add(new SelectListItem(vc, id!.ToString())); } return results; } public async Task<CreateProofResponse> CreateProof(string userAuthToken, string credentialItemId) { // Auth token from user _trinsicService.Options.AuthToken = userAuthToken; var selectiveProof = await _trinsicService.Credential.CreateProofAsync(new() { ItemId = credentialItemId, RevealTemplate = new() { TemplateAttributes = { "firstName", "lastName", "dateOfBirth", "diplomaTitle" } } }); return selectiveProof; } public AuthenticateInitResponse AuthenticateInit(string userId, string universityEcosystemId) { var requestInit = new AuthenticateInitRequest { Identity = userId, Provider = IdentityProvider.Email, EcosystemId = universityEcosystemId }; var authenticateInitResponse = _trinsicService.Wallet.AuthenticateInit(requestInit); return authenticateInitResponse; } public AuthenticateConfirmResponse AuthenticateConfirm(string code, string challenge) { var requestConfirm = new AuthenticateConfirmRequest { Challenge = challenge, Response = code }; var authenticateConfirmResponse = _trinsicService.Wallet.AuthenticateConfirm(requestConfirm); return authenticateConfirmResponse; } }

The wallet connect screen can look something like this. The proof can be created using one of the credentials and the proof can be copied.
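Pulling these methods together, the calling code could look roughly like this. The wiring is a sketch only, and the response property names (Challenge, AuthToken, ProofDocumentJson) should be checked against the trinsic.id SDK version in use:

// Sketch: two-step email-code wallet authentication followed by proof creation.
// _generateProofService is assumed to be injected; email, code and ecosystem id
// come from the UI.
var init = _generateProofService.AuthenticateInit(email, universityEcosystemId);

// The user receives a code by email and posts it back to the page.
var confirm = _generateProofService.AuthenticateConfirm(codeFromEmail, init.Challenge);

// The returned auth token identifies the user's wallet for the following calls.
var items = await _generateProofService.GetItemsInWallet(confirm.AuthToken);

// Create a selective proof from the first credential item found.
var proof = await _generateProofService.CreateProof(confirm.AuthToken, items.First().Value);
var proofJson = proof.ProofDocumentJson; // this is what gets copied to the verifier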
Verifying the proof

The verifier can use the proof to validate the required information. The verifier has no connection to the issuer; it is in a different ecosystem. Because of this, the verifier must validate the issuer DID. This must be a trusted issuer (university).

public class DiplomaVerifyService { private readonly TrinsicService _trinsicService; private readonly IConfiguration _configuration; public List<SelectListItem> TrustedUniversities = new(); public List<SelectListItem> TrustedCredentials = new(); public DiplomaVerifyService(TrinsicService trinsicService, IConfiguration configuration) { _trinsicService = trinsicService; _configuration = configuration; TrustedUniversities = _configuration.GetSection("TrustedUniversities")!.Get<List<SelectListItem>>()!; TrustedCredentials = _configuration.GetSection("TrustedCredentials")!.Get<List<SelectListItem>>()!; } public async Task<(VerifyProofResponse? Proof, bool IsValid)> Verify(string studentProof, string universityIssuer) { // Verifiers auth token // Auth token from trinsic.id root API KEY provider _trinsicService.Options.AuthToken = _configuration["TrinsicCompanyXHumanResourcesOptionsApiKey"]; var verifyProofResponse = await _trinsicService.Credential.VerifyProofAsync(new VerifyProofRequest { ProofDocumentJson = studentProof, }); var jsonObject = JsonNode.Parse(studentProof)!; var issuer = jsonObject["issuer"]; // check issuer if (universityIssuer != issuer!.ToString()) { return (null, false); } return (verifyProofResponse, true); } }

The ASP.NET Core UI could look something like this.

Notes

Using an SSI-based solution to share data securely across domains or ecosystems can be very useful and opens up many business possibilities. SSI or id-tech is a good solution for identity checks and credential checks; it is not a good solution for authentication. Phishing is hard to solve in cross-device environments. Passkeys or FIDO2 is the way forward for user authentication. The biggest problem for SSI and id-tech is the interoperability between solutions. For example, the following credential types exist and are only usable on specific solutions: JSON JWT, SD-JWT, JSON-LD with LD Signatures, AnonCreds with CL Signatures, JSON-LD ZKP with BBS+ Signatures, and ISO mDL (ISO/IEC 18013-5). There are multiple standards, multiple solutions, multiple networks and multiple ledgers. No two systems seem to work with each other. In a closed ecosystem it will work, but SSI has few advantages over existing solutions in a closed ecosystem. Interoperability needs to be solved.

Links

https://dashboard.trinsic.id/ecosystem
https://github.com/trinsic-id/sdk
https://docs.trinsic.id/dotnet/
Integrating Verifiable Credentials in 2023 – Trinsic Platform Walkthrough
https://openid.net/specs/openid-4-verifiable-credential-issuance-1_0.html
https://openid.net/specs/openid-4-verifiable-presentations-1_0.html
https://openid.net/specs/openid-connect-self-issued-v2-1_0.html
https://datatracker.ietf.org/doc/draft-ietf-oauth-selective-disclosure-jwt/


Asp.Net Core 3.1 Dependency Injection Error Unable ...
Category: .Net 7

Error Unable to resolve service for ty ...


Views: 2513 Likes: 114
Asp.Net Core 3.1 2020 Conference Notes
Category: Education

Focusing on MicroServices ...


Views: 326 Likes: 99
Asp.Net Core 3.1 Error  The 'inject' directive exp ...
Category: .Net 7

Problem Severity Code Description Project File Line Suppression StateErro ...


Views: 1828 Likes: 94
How to change Application Start URL for Asp.Net Co ...
Category: RESEARCH

Introduction In an ASP.NET Core application, the start URL is the first p ...


Views: 0 Likes: 3
Auto sign-out using ASP.NET Core Razor Pages with Azure AD B2C
Auto sign-out using ASP.NET Core Razor Pages with ...

This article shows how an ASP.NET Core Razor Page application could implement an automatic sign-out when a user does not use the application for n minutes. The application is secured using Azure AD B2C. To remove the session, the client must sign out both on the ASP.NET Core application and the Azure AD B2C identity provider, or whatever identity provider you are using. Code: https://github.com/damienbod/AspNetCoreB2cLogout

Sometimes clients require that an application supports automatic sign-out in an SSO environment. An example of this is when a user uses a shared computer and does not click the sign-out button. The session would remain active for the next user. This method is not foolproof, as the end user could save the credentials in the browser. If you need a better solution, then SSO and rolling sessions should be avoided, but this leads to a worse user experience. The ASP.NET Core application is protected using Microsoft.Identity.Web. This takes care of the client authentication flows using Azure AD B2C as the identity provider. Once authenticated, the session is stored in a cookie. A distributed cache is added to record the last activity of each user. An IAsyncPageFilter implementation is used and added as a global filter to all requests for Razor Pages. The SessionTimeoutAsyncPageFilter class implements the IAsyncPageFilter interface.

builder.Services.AddDistributedMemoryCache(); builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApp(builder.Configuration, "AzureAdB2c" ) .EnableTokenAcquisitionToCallDownstreamApi(Array.Empty<string>()) .AddDistributedTokenCaches(); builder.Services.AddAuthorization(options => { options.FallbackPolicy = options.DefaultPolicy; }); builder.Services.AddSingleton<SessionTimeoutAsyncPageFilter>(); builder.Services.AddRazorPages() .AddMvcOptions(options => { options.Filters.Add(typeof(SessionTimeoutAsyncPageFilter)); }) .AddMicrosoftIdentityUI();

The IAsyncPageFilter interface is used to catch the requests for the Razor Pages. The OnPageHandlerExecutionAsync method is used to implement the automatic end-session logic. We use the default name identifier claim type to get an ID for the user. If using the standard claims instead of the Microsoft namespace mapping, this would be different. Match the claim returned in the id_token from the OpenID Connect authentication. I check for idle time. If no request was sent in the last n minutes, the application will sign out, both from the local cookie session and on Azure AD B2C. It is important to sign out on the identity provider as well. If the idle time is less than the allowed time span, the DateTime timestamp is persisted to cache.

public async Task OnPageHandlerExecutionAsync(PageHandlerExecutingContext context, PageHandlerExecutionDelegate next) { var claimTypes = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"; var name = context.HttpContext .User .Claims .FirstOrDefault(c => c.Type == claimTypes)! .Value; if (name == null) throw new ArgumentNullException(nameof(name)); var lastActivity = GetFromCache(name); if (lastActivity != null && lastActivity.GetValueOrDefault() .AddMinutes(timeoutInMinutes) < DateTime.UtcNow) { await context.HttpContext .SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme); await context.HttpContext .SignOutAsync(OpenIdConnectDefaults.AuthenticationScheme); } AddUpdateCache(name); await next.Invoke(); }
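The cache field and the timeout constants used above are not shown in the snippets; a minimal sketch of the missing plumbing, with assumed values for the demo:

public class SessionTimeoutAsyncPageFilter : IAsyncPageFilter
{
    private readonly IDistributedCache _cache;

    // Assumed demo values; tune these for your requirements.
    private const int timeoutInMinutes = 15;
    private const int cacheExpirationInDays = 1;

    public SessionTimeoutAsyncPageFilter(IDistributedCache cache)
    {
        _cache = cache;
    }

    // IAsyncPageFilter also requires this method; nothing needs to happen here.
    public Task OnPageHandlerSelectionAsync(PageHandlerSelectedContext context)
        => Task.CompletedTask;

    // OnPageHandlerExecutionAsync, AddUpdateCache and GetFromCache
    // are implemented as shown in this post.
}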
Distributed cache is used to persist the user idle time from each session. This might be expensive for applications with many users. In this demo, the UTC now value is used for the check. This might need to be improved, and the cache length as well; it needs to be validated whether this is enough for all the different timeout combinations.

private void AddUpdateCache(string name) { var options = new DistributedCacheEntryOptions() .SetSlidingExpiration(TimeSpan .FromDays(cacheExpirationInDays)); _cache.SetString(name, DateTime .UtcNow.ToString("s"), options); } private DateTime? GetFromCache(string key) { var item = _cache.GetString(key); if (item != null) { return DateTime.Parse(item); } return null; }

When the session times out, the code executes the OnPageHandlerExecutionAsync method and signs out. This works for Razor Pages. This is not the only way of supporting this, and it is not an easy requirement to fully implement. The next step would be to support this from SPA UIs which send JavaScript or AJAX requests.

Links

https://learn.microsoft.com/en-us/azure/active-directory-b2c/openid-connect#send-a-sign-out-request
https://learn.microsoft.com/en-us/aspnet/core/razor-pages/filter?view=aspnetcore-7.0
https://github.com/AzureAD/microsoft-identity-web


Asp.Net Core 3.1 Razor Error: CS1003: Syntax erro ...
Category: .Net 7

Question How do you solve Asp.Net Core Razor Error CS1003 Syntax error, ...


Views: 613 Likes: 104
DNS Host File Not Working (ASP.Net Core 3.1)
Category: .Net 7

Problem When developing for Web, you might be in a situation where you need to ...


Views: 579 Likes: 89
What is ConfigureAwait(true) in Asp.Net Core 3.1 C ...
Category: .Net 7

When to Use ConfigureAwait() in Asp.net Core 3.1 Code<br ...


Views: 242 Likes: 95
[Solved] Asp.Net Core 3.1 config.GetSection return ...
Category: .Net 7

Question How do you resolve IConfiguration config get value from appSettings.js ...


Views: 2590 Likes: 117
Visual Studio Build Error: Found conflicts betwee ...
Category: Technology

Question How do you resolve Visual Studio 2019 Build Error Found conflic ...


Views: 894 Likes: 82
Entity Framework Core Error: To change the IDENTIT ...
Category: Entity Framework

Question How do you resolve the Entity ...


Views: 2404 Likes: 105
How to Debug Asp.Net Core 3.1 in IIS Development E ...
Category: .Net 7

Question How do you troubleshoot IIS Asp.Net Core 3.1 that is running in Produc ...


Views: 461 Likes: 132
Asp.Net Core 3.1 Not Returning NoContent() or BadR ...
Category: .Net 7

When developing in Asp.Net Core 3.1 you might want to code your algorithms t ...


Views: 663 Likes: 107
error : MSB4803: The task "ResolveComReference" is ...
Category: .Net 7

How do I resolve this error? "C\Program Files\d ...


Views: 0 Likes: 61
SignalR Error in Dot Net Core 3.1 (.Net Core 3.1) ...
Category: .Net 7

Problems when implementing SignalR in Dot Net Core 3.1 (.Net Core 3.1) Error Failed to invoke 'H ...


Views: 2201 Likes: 100
How to Call a Soap Web Service from Asp.Net Core
Category: .Net 7

Question How do you call a web service that uses XML Soap Envelope as Payload B ...


Views: 725 Likes: 106
Docker on Raspberry Pi Error .Net Core 3.1 Error: ...
Category: Docker

How do you resolve this error Docker on Raspberry Pi Error .Net Core 3.1 Error ...


Views: 924 Likes: 119
Provision Azure IoT Hub devices using DPS and X.509 certificates in ASP.NET Core
Provision Azure IoT Hub devices using DPS and X.50 ...

This article shows how to provision Azure IoT Hub devices using Azure IoT Hub device provisioning services (DPS) and ASP.NET Core. The devices are set up using chained certificates created using .NET Core and managed in the web application. The data is persisted in a database using EF Core and the certificates are generated using the CertificateManager Nuget package. Code: https://github.com/damienbod/AzureIoTHubDps

Setup

To set up a new Azure IoT Hub DPS, enrollment group and devices, the web application creates a new certificate using an ECDsa private key and the .NET Core APIs. The data is stored in two pem files, one for the public certificate and one for the private key. The pem public certificate file is downloaded from the web application and uploaded to the certificates blade in Azure IoT Hub DPS. The web application persists the data to a database using EF Core and SQL. A new certificate is created from the DPS root certificate and used to create a DPS enrollment group. The certificates are chained from the original DPS certificate. New devices are registered and created using the enrollment group. Another new device certificate chained from the enrollment group certificate is created per device and used in the DPS. The Azure IoT Hub DPS creates a new IoT Hub device using the linked IoT Hubs. Once the IoT Hub is running, the private key from the device certificate is used to authenticate the device and send data to the server. When the ASP.NET Core web application is started, users can create new certificates and enrollment groups and add devices to the groups. I plan to extend the web application to add devices, delete devices, and delete groups. I plan to add authorization for the different user types and better paging for the different UIs. At present all certificates use ECDsa private keys, but this can easily be changed to other types. This depends on the type of root certificate used. The application is secured using Microsoft.Identity.Web and requires an authenticated user. This can be set up in the program file or in the startup extensions. I use EnableTokenAcquisitionToCallDownstreamApi to force the OpenID Connect code flow. The configuration is read from the default AzureAd app.settings and the whole application is required to be authenticated. When the enable and disable flows are added, I will add different users with different authorization levels.

builder.Services.AddDistributedMemoryCache(); builder.Services.AddAuthentication( OpenIdConnectDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApp( builder.Configuration.GetSection("AzureAd")) .EnableTokenAcquisitionToCallDownstreamApi() .AddDistributedTokenCaches();

Create an Azure IoT Hub DPS certificate

The web application is used to create devices using certificates and DPS enrollment groups. The DpsCertificateProvider class is used to create the root self-signed certificate for the DPS enrollment groups. The NewRootCertificate method from the CertificateManager Nuget package is used to create the certificate using an ECDsa private key. This package wraps the default .NET APIs for creating certificates and adds a layer of abstraction. You could just use the lower-level APIs directly. The certificate is exported to two separate pem files and persisted to the database.

public class DpsCertificateProvider { private readonly CreateCertificatesClientServerAuth _createCertsService; private readonly ImportExportCertificate _iec; private readonly DpsDbContext _dpsDbContext; public DpsCertificateProvider(CreateCertificatesClientServerAuth ccs, ImportExportCertificate importExportCertificate, DpsDbContext dpsDbContext) { _createCertsService = ccs; _iec = importExportCertificate; _dpsDbContext = dpsDbContext; } public async Task<(string PublicPem, int Id)> CreateCertificateForDpsAsync(string certName) { var certificateDps = _createCertsService.NewRootCertificate( new DistinguishedName { CommonName = certName, Country = "CH" }, new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) }, 3, certName); var publicKeyPem = _iec.PemExportPublicKeyCertificate(certificateDps); string pemPrivateKey = string.Empty; using (ECDsa? ecdsa = certificateDps.GetECDsaPrivateKey()) { pemPrivateKey = ecdsa!.ExportECPrivateKeyPem(); FileProvider.WriteToDisk($"{certName}-private.pem", pemPrivateKey); } var item = new DpsCertificate { Name = certName, PemPrivateKey = pemPrivateKey, PemPublicKey = publicKeyPem }; _dpsDbContext.DpsCertificates.Add(item); await _dpsDbContext.SaveChangesAsync(); return (publicKeyPem, item.Id); } public async Task<List<DpsCertificate>> GetDpsCertificatesAsync() { return await _dpsDbContext.DpsCertificates.ToListAsync(); } public async Task<DpsCertificate?> GetDpsCertificateAsync(int id) { return await _dpsDbContext.DpsCertificates.FirstOrDefaultAsync(item => item.Id == id); } }
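The FileProvider helper used above is not shown in the post; a minimal sketch of what it might look like, where the return value of WritePfxToDisk is an assumption based on how it is used later:

// Hypothetical FileProvider - not part of the original post. Requires System.IO.
public static class FileProvider
{
    public static void WriteToDisk(string fileName, string contents)
        => File.WriteAllText(fileName, contents);

    // Used later when registering devices; assumed to return the file path.
    public static string WritePfxToDisk(string fileName, byte[] pfxBytes)
    {
        File.WriteAllBytes(fileName, pfxBytes);
        return Path.GetFullPath(fileName);
    }
}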
Once the root certificate is created, you can download the public pem file from the web application and upload it to the Azure IoT Hub DPS portal. This needs to be verified. You could also use a CA-created certificate for this, if it is possible to create child chained certificates. The enrollment groups are created from this root certificate.

Create an Azure IoT Hub DPS enrollment group

Devices can be created in different ways in the Azure IoT Hub. We use a DPS enrollment group with certificates to create the Azure IoT devices. The DpsEnrollmentGroupProvider is used to create the enrollment group certificate. This uses the root certificate created in the previous step and chains the new group certificate from it. The enrollment group is used to add devices. Default values are defined for the enrollment group and the pem files are saved to the database. The root certificate is read from the database, and the chained enrollment group certificate uses an ECDsa private key like the root self-signed certificate. The CreateEnrollmentGroup method is used to set the initial values of the IoT Hub device. The ProvisioningStatus is set to enabled. This means that when the device is registered, it will be enabled to send messages. You could also set this to disabled and enable it later, when the device is used by an end client for the first time. A MAC or a serial code from the device hardware could be used to enable the IoT Hub device. By waiting until the device is started by the end client, you could choose an IoT Hub optimized for this client.
public class DpsEnrollmentGroupProvider { private IConfiguration Configuration { get;set;} private readonly ILogger<DpsEnrollmentGroupProvider> _logger; private readonly DpsDbContext _dpsDbContext; private readonly ImportExportCertificate _iec; private readonly CreateCertificatesClientServerAuth _createCertsService; private readonly ProvisioningServiceClient _provisioningServiceClient; public DpsEnrollmentGroupProvider(IConfiguration config, ILoggerFactory loggerFactory, ImportExportCertificate importExportCertificate, CreateCertificatesClientServerAuth ccs, DpsDbContext dpsDbContext) { Configuration = config; _logger = loggerFactory.CreateLogger<DpsEnrollmentGroupProvider>(); _dpsDbContext = dpsDbContext; _iec = importExportCertificate; _createCertsService = ccs; _provisioningServiceClient = ProvisioningServiceClient.CreateFromConnectionString( Configuration.GetConnectionString("DpsConnection")); } public async Task<(string Name, int Id)> CreateDpsEnrollmentGroupAsync( string enrollmentGroupName, string certificatePublicPemId) { _logger.LogInformation("Starting CreateDpsEnrollmentGroupAsync..."); _logger.LogInformation("Creating a new enrollmentGroup..."); var dpsCertificate = _dpsDbContext.DpsCertificates .FirstOrDefault(t => t.Id == int.Parse(certificatePublicPemId)); var rootCertificate = X509Certificate2.CreateFromPem( dpsCertificate!.PemPublicKey, dpsCertificate.PemPrivateKey); // create an intermediate for each group var certName = $"{enrollmentGroupName}"; var certDpsGroup = _createCertsService.NewIntermediateChainedCertificate( new DistinguishedName { CommonName = certName, Country = "CH" }, new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) }, 2, certName, rootCertificate); // get the public key certificate for the enrollment var pemDpsGroupPublic = _iec.PemExportPublicKeyCertificate(certDpsGroup); string pemDpsGroupPrivate = string.Empty; using (ECDsa? 
ecdsa = certDpsGroup.GetECDsaPrivateKey()) { pemDpsGroupPrivate = ecdsa!.ExportECPrivateKeyPem(); FileProvider.WriteToDisk($"{enrollmentGroupName}-private.pem", pemDpsGroupPrivate); } Attestation attestation = X509Attestation.CreateFromRootCertificates(pemDpsGroupPublic); EnrollmentGroup enrollmentGroup = CreateEnrollmentGroup(enrollmentGroupName, attestation); _logger.LogInformation("{enrollmentGroup}", enrollmentGroup); _logger.LogInformation("Adding new enrollmentGroup..."); EnrollmentGroup enrollmentGroupResult = await _provisioningServiceClient .CreateOrUpdateEnrollmentGroupAsync(enrollmentGroup); _logger.LogInformation("EnrollmentGroup created with success."); _logger.LogInformation("{enrollmentGroupResult}", enrollmentGroupResult); DpsEnrollmentGroup newItem = await PersistData(enrollmentGroupName, dpsCertificate, pemDpsGroupPublic, pemDpsGroupPrivate); return (newItem.Name, newItem.Id); } private async Task<DpsEnrollmentGroup> PersistData(string enrollmentGroupName, DpsCertificate dpsCertificate, string pemDpsGroupPublic, string pemDpsGroupPrivate) { var newItem = new DpsEnrollmentGroup { DpsCertificateId = dpsCertificate.Id, Name = enrollmentGroupName, DpsCertificate = dpsCertificate, PemPublicKey = pemDpsGroupPublic, PemPrivateKey = pemDpsGroupPrivate }; _dpsDbContext.DpsEnrollmentGroups.Add(newItem); dpsCertificate.DpsEnrollmentGroups.Add(newItem); await _dpsDbContext.SaveChangesAsync(); return newItem; } private static EnrollmentGroup CreateEnrollmentGroup(string enrollmentGroupName, Attestation attestation) { return new EnrollmentGroup(enrollmentGroupName, attestation) { ProvisioningStatus = ProvisioningStatus.Enabled, ReprovisionPolicy = new ReprovisionPolicy { MigrateDeviceData = false, UpdateHubAssignment = true }, Capabilities = new DeviceCapabilities { IotEdge = false }, InitialTwinState = new TwinState( new TwinCollection("{ \"updatedby\":\"" + "damien" + "\", \"timeZone\":\"" + TimeZoneInfo.Local.DisplayName + "\" }"), new TwinCollection("{ }") ) }; } public async Task<List<DpsEnrollmentGroup>> GetDpsGroupsAsync(int? certificateId = null) { if (certificateId == null) { return await _dpsDbContext.DpsEnrollmentGroups.ToListAsync(); } return await _dpsDbContext.DpsEnrollmentGroups .Where(s => s.DpsCertificateId == certificateId).ToListAsync(); } public async Task<DpsEnrollmentGroup?> GetDpsGroupAsync(int id) { return await _dpsDbContext.DpsEnrollmentGroups .FirstOrDefaultAsync(d => d.Id == id); } }

Register a device in the enrollment group

The DpsRegisterDeviceProvider class creates a new device chained certificate using the enrollment group certificate and registers the device using the ProvisioningDeviceClient. The ProvisioningTransportHandlerAmqp transport is set in this example. There are different transport types possible and you need to choose the one which best meets your needs. The device certificate uses an ECDsa private key and everything is stored to the database. The PFX for Windows is stored directly to the file system. I use pem files and create the certificate from these in the device client that sends data to the hub; this is platform independent. The created PFX file requires a password to use it.
public class DpsRegisterDeviceProvider { private IConfiguration Configuration { get; set; } private readonly ILogger<DpsRegisterDeviceProvider> _logger; private readonly DpsDbContext _dpsDbContext; private readonly ImportExportCertificate _iec; private readonly CreateCertificatesClientServerAuth _createCertsService; public DpsRegisterDeviceProvider(IConfiguration config, ILoggerFactory loggerFactory, ImportExportCertificate importExportCertificate, CreateCertificatesClientServerAuth ccs, DpsDbContext dpsDbContext) { Configuration = config; _logger = loggerFactory.CreateLogger<DpsRegisterDeviceProvider>(); _dpsDbContext = dpsDbContext; _iec = importExportCertificate; _createCertsService = ccs; } public async Task<(int? DeviceId, string? ErrorMessage)> RegisterDeviceAsync( string deviceCommonNameDevice, string dpsEnrollmentGroupId) { int? deviceId = null; var scopeId = Configuration["ScopeId"]; var dpsEnrollmentGroup = _dpsDbContext.DpsEnrollmentGroups .FirstOrDefault(t => t.Id == int.Parse(dpsEnrollmentGroupId)); var certDpsEnrollmentGroup = X509Certificate2.CreateFromPem( dpsEnrollmentGroup!.PemPublicKey, dpsEnrollmentGroup.PemPrivateKey); var newDevice = new DpsEnrollmentDevice { Password = GetEncodedRandomString(30), Name = deviceCommonNameDevice.ToLower(), DpsEnrollmentGroupId = dpsEnrollmentGroup.Id, DpsEnrollmentGroup = dpsEnrollmentGroup }; var certDevice = _createCertsService.NewDeviceChainedCertificate( new DistinguishedName { CommonName = $"{newDevice.Name}" }, new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) }, $"{newDevice.Name}", certDpsEnrollmentGroup); var deviceInPfxBytes = _iec.ExportChainedCertificatePfx(newDevice.Password, certDevice, certDpsEnrollmentGroup); // This is required if you want PFX exports to work. newDevice.PathToPfx = FileProvider.WritePfxToDisk($"{newDevice.Name}.pfx", deviceInPfxBytes); // get the public key certificate for the device newDevice.PemPublicKey = _iec.PemExportPublicKeyCertificate(certDevice); FileProvider.WriteToDisk($"{newDevice.Name}-public.pem", newDevice.PemPublicKey); using (ECDsa? ecdsa = certDevice.GetECDsaPrivateKey()) { newDevice.PemPrivateKey = ecdsa!.ExportECPrivateKeyPem(); FileProvider.WriteToDisk($"{newDevice.Name}-private.pem", newDevice.PemPrivateKey); } // setup Windows store deviceCert var pemExportDevice = _iec.PemExportPfxFullCertificate(certDevice, newDevice.Password); var certDeviceForCreation = _iec.PemImportCertificate(pemExportDevice, newDevice.Password); using (var security = new SecurityProviderX509Certificate(certDeviceForCreation, new X509Certificate2Collection(certDpsEnrollmentGroup))) // To optimize for size, reference only the protocols used by your application. 
using (var transport = new ProvisioningTransportHandlerAmqp(TransportFallbackType.TcpOnly)) //using (var transport = new ProvisioningTransportHandlerHttp()) //using (var transport = new ProvisioningTransportHandlerMqtt(TransportFallbackType.TcpOnly)) //using (var transport = new ProvisioningTransportHandlerMqtt(TransportFallbackType.WebSocketOnly)) { var client = ProvisioningDeviceClient.Create("global.azure-devices-provisioning.net", scopeId, security, transport); try { var result = await client.RegisterAsync(); _logger.LogInformation("DPS client created {result}", result); } catch (Exception ex) { _logger.LogError("DPS client created {result}", ex.Message); return (null, ex.Message); } } _dpsDbContext.DpsEnrollmentDevices.Add(newDevice); dpsEnrollmentGroup.DpsEnrollmentDevices.Add(newDevice); await _dpsDbContext.SaveChangesAsync(); deviceId = newDevice.Id; return (deviceId, null); } private static string GetEncodedRandomString(int length) { var base64 = Convert.ToBase64String(GenerateRandomBytes(length)); return base64; } private static byte[] GenerateRandomBytes(int length) { var byteArray = new byte[length]; RandomNumberGenerator.Fill(byteArray); return byteArray; } public async Task<List<DpsEnrollmentDevice>> GetDpsDevicesAsync(int? dpsEnrollmentGroupId) { if(dpsEnrollmentGroupId == null) { return await _dpsDbContext.DpsEnrollmentDevices.ToListAsync(); } return await _dpsDbContext.DpsEnrollmentDevices.Where(s => s.DpsEnrollmentGroupId == dpsEnrollmentGroupId).ToListAsync(); } public async Task<DpsEnrollmentDevice?> GetDpsDeviceAsync(int id) { return await _dpsDbContext.DpsEnrollmentDevices .Include(device => device.DpsEnrollmentGroup) .FirstOrDefaultAsync(d => d.Id == id); } }

Download certificates and use

The private and the public pem files are used to set up the Azure IoT Hub device and send data from the device to the server. An HTML form is used to download the files. The form sends a post request to the file download API.

<form action="/api/FileDownload/DpsDevicePublicKeyPem" method="post"> <input type="hidden" value="@Model.DpsDevice.Id" id="Id" name="Id" /> <button type="submit" style="padding-left:0" class="btn btn-link">Download Public PEM</button> </form>

The DpsDevicePublicKeyPemAsync method implements the file download. The method gets the data from the database and returns it as a pem file.

[HttpPost("DpsDevicePublicKeyPem")] public async Task<IActionResult> DpsDevicePublicKeyPemAsync([FromForm] int id) { var cert = await _dpsRegisterDeviceProvider .GetDpsDeviceAsync(id); if (cert == null) throw new ArgumentNullException(nameof(cert)); if (cert.PemPublicKey == null) throw new ArgumentNullException(nameof(cert.PemPublicKey)); return File(Encoding.UTF8.GetBytes(cert.PemPublicKey), "application/octet-stream", $"{cert.Name}-public.pem"); }

The device UI displays the data and allows the authenticated user to download the files. The CertificateManager and the Microsoft.Azure.Devices.Client Nuget packages are used to implement the IoT Hub device client. The pem files with the public certificate and the private key can be loaded into an X509Certificate instance. This is then used to send the data using the DeviceAuthenticationWithX509Certificate class. The SendEvent method sends the data using the IoT Hub device Message class.
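The SendEvent helper itself is not shown in the post; a possible shape, assuming a simple JSON payload where the payload fields are invented for illustration:

// Hypothetical SendEvent - sends one telemetry message to the IoT Hub.
// Uses Microsoft.Azure.Devices.Client.Message, System.Text and System.Text.Json.
private static async Task SendEvent(DeviceClient deviceClient)
{
    var payload = JsonSerializer.Serialize(new { temperature = 21.5, sentUtc = DateTime.UtcNow });

    using var message = new Message(Encoding.UTF8.GetBytes(payload))
    {
        ContentType = "application/json",
        ContentEncoding = "utf-8"
    };

    await deviceClient.SendEventAsync(message);
    Console.WriteLine($"Sent: {payload}");
}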
var serviceProvider = new ServiceCollection() .AddCertificateManager() .BuildServiceProvider(); var iec = serviceProvider.GetService<ImportExportCertificate>(); #region pem var deviceNamePem = "robot1-feed"; var certPem = File.ReadAllText($"{_pathToCerts}{deviceNamePem}-public.pem"); var eccPem = File.ReadAllText($"{_pathToCerts}{deviceNamePem}-private.pem"); var cert = X509Certificate2.CreateFromPem(certPem, eccPem); // setup deviceCert windows store export var pemDeviceCertPrivate = iec!.PemExportPfxFullCertificate(cert); var certDevice = iec.PemImportCertificate(pemDeviceCertPrivate); #endregion pem var auth = new DeviceAuthenticationWithX509Certificate(deviceNamePem, certDevice); var deviceClient = DeviceClient.Create(iotHubUrl, auth, transportType); if (deviceClient == null) { Console.WriteLine("Failed to create DeviceClient!"); } else { Console.WriteLine("Successfully created DeviceClient!"); SendEvent(deviceClient).Wait(); }

Notes

Using certificates in .NET and Windows is complicated due to how the private keys are handled and loaded. The private keys need to be exported or imported into the stores etc. This is not an easy API to get working, and the docs for this are confusing. This type of device transport and the default setup for the device would need to be adapted for your system. In this example I used ECDsa certificates, but you could also use RSA-based keys. The root certificate could be replaced with a CA-issued one. I created long-lived certificates because I do not want the devices to stop working in the field. This should be moved to a configuration. A certificate rotation flow would make sense as well. In the follow-up articles, I plan to save the events using hot and cold paths and implement device enable and disable flows. I also plan to write about device twins. Device twins are an excellent way of sharing data in both directions.

Links

https://github.com/Azure/azure-iot-sdk-csharp
https://github.com/damienbod/AspNetCoreCertificates
Creating Certificates for X.509 security in Azure IoT Hub using .NET Core
https://learn.microsoft.com/en-us/azure/iot-hub/troubleshoot-error-codes
https://stackoverflow.com/questions/52750160/what-is-the-rationale-for-all-the-different-x509keystorageflags/52840537#52840537
https://github.com/dotnet/runtime/issues/19581
https://www.nuget.org/packages/CertificateManager
Azure IoT Hub Documentation | Microsoft Learn


How to Scaffold or Generate a Custom Identity Logi ...
Category: .Net 7

How to Scaffold or Generate a Custom Identity Login UI in Asp.Net Core 3.1 1. Right Click ...


Views: 960 Likes: 87
Asp.Net Core 3.1 Razor View Error: CS1963: An expr ...
Category: .Net 7

Question How do you resolve Asp.Net Core 3.1 Razor View Error CS1963 An expr ...


Views: 745 Likes: 107
Reset user account passwords using Microsoft Graph and application permissions in ASP.NET Core
Reset user account passwords using Microsoft Graph ...

This article shows how to reset a password for tenant members using a Microsoft Graph application client in ASP.NET Core. An Azure App registration is used to define the application permission for the Microsoft Graph client, and the User Administrator role is assigned to the Azure Enterprise application created from the Azure App registration. Code: https://github.com/damienbod/azuerad-reset

Create an Azure App registration with the Graph permission

An Azure App registration was created which requires a secret or a certificate. The Azure App registration has the application User.ReadWrite.All permission and is used to assign the Azure role. This client is only for application clients and not delegated clients.

Assign the User Administrator role to the App Registration

The User Administrator role is assigned to the Azure App registration (Azure Enterprise application per tenant). You can do this by using the User Administrator assignments, where a new one can be added. Choose the Enterprise application corresponding to the Azure App registration and assign the role to be always active.

Create the Microsoft Graph application client

In the ASP.NET Core application, a new Graph application client can be created using the Microsoft Graph SDK and Azure Identity. The GetChainedTokenCredentials method is used to authenticate using a managed identity for the production deployment or a user secret in development. You could also use a certificate. The managed identity comes from the Azure App Service where the application is deployed in production.

using Azure.Identity; using Microsoft.Graph; namespace SelfServiceAzureAdPasswordReset; public class GraphApplicationClientService { private readonly IConfiguration _configuration; private readonly IHostEnvironment _environment; private GraphServiceClient? _graphServiceClient; public GraphApplicationClientService(IConfiguration configuration, IHostEnvironment environment) { _configuration = configuration; _environment = environment; } /// <summary> /// gets a singleton instance of the GraphServiceClient /// </summary> public GraphServiceClient GetGraphClientWithManagedIdentityOrDevClient() { if (_graphServiceClient != null) return _graphServiceClient; string[] scopes = new[] { "https://graph.microsoft.com/.default" }; var chainedTokenCredential = GetChainedTokenCredentials(); _graphServiceClient = new GraphServiceClient(chainedTokenCredential, scopes); return _graphServiceClient; } private ChainedTokenCredential GetChainedTokenCredentials() { if (!_environment.IsDevelopment()) { // You could also use a certificate here return new ChainedTokenCredential(new ManagedIdentityCredential()); } else // dev env { var tenantId = _configuration["AzureAdGraphTenantId"]; var clientId = _configuration.GetValue<string>("AzureAdGraphClientId"); var clientSecret = _configuration.GetValue<string>("AzureAdGraphClientSecret"); var options = new TokenCredentialOptions { AuthorityHost = AzureAuthorityHosts.AzurePublicCloud }; // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential var devClientSecretCredential = new ClientSecretCredential( tenantId, clientId, clientSecret, options); var chainedTokenCredential = new ChainedTokenCredential(devClientSecretCredential); return chainedTokenCredential; } } }
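The client service can then be registered for dependency injection. The post does not show this wiring, so the following registration is an assumption:

// Program.cs - hypothetical registration of the Graph client service
// and the two reset services used further down.
builder.Services.AddSingleton<GraphApplicationClientService>();
builder.Services.AddScoped<UserResetPasswordApplicationGraphSDK4>();
builder.Services.AddScoped<UserResetPasswordApplicationGraphSDK5>();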
Reset the password Microsoft Graph SDK 4

Once the client is authenticated, the Microsoft Graph SDK can be used to implement the logic. You need to decide if SDK 4 or SDK 5 is used to implement the Graph client. Most applications must still use Graph SDK 4, but no docs exist for this anymore; refer to Stackoverflow or use trial and error. The application has one method to get the user and a second one to reset the password and force a change on the next authentication. This is ok for low-level security, but TAP with a strong authentication should always be used if possible.

using Microsoft.Graph; using System.Security.Cryptography; namespace SelfServiceAzureAdPasswordReset; public class UserResetPasswordApplicationGraphSDK4 { private readonly GraphApplicationClientService _graphApplicationClientService; public UserResetPasswordApplicationGraphSDK4(GraphApplicationClientService graphApplicationClientService) { _graphApplicationClientService = graphApplicationClientService; } private async Task<string> GetUserIdAsync(string email) { var filter = $"startswith(userPrincipalName,'{email}')"; var graphServiceClient = _graphApplicationClientService .GetGraphClientWithManagedIdentityOrDevClient(); var users = await graphServiceClient.Users .Request() .Filter(filter) .GetAsync(); return users.CurrentPage[0].Id; } public async Task<string?> ResetPassword(string email) { var graphServiceClient = _graphApplicationClientService .GetGraphClientWithManagedIdentityOrDevClient(); var userId = await GetUserIdAsync(email); if (userId == null) { throw new ArgumentNullException(nameof(email)); } var password = GetRandomString(); await graphServiceClient.Users[userId].Request() .UpdateAsync(new User { PasswordProfile = new PasswordProfile { Password = password, ForceChangePasswordNextSignIn = true } }); return password; } private static string GetRandomString() { var random = $"{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}-AC"; return random; } private static int GenerateRandom() { return RandomNumberGenerator.GetInt32(100000000, int.MaxValue); } }
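One thing to watch out for: GetUserIdAsync indexes CurrentPage[0] without checking whether the filter returned any users, so an unknown email throws. A slightly more defensive variant could look like this; a sketch, not from the original code:

private async Task<string?> GetUserIdAsync(string email)
{
    var graphServiceClient = _graphApplicationClientService
        .GetGraphClientWithManagedIdentityOrDevClient();

    var users = await graphServiceClient.Users
        .Request()
        .Filter($"startswith(userPrincipalName,'{email}')")
        .GetAsync();

    // Return null instead of throwing when no user matches;
    // ResetPassword already handles the null case.
    return users.CurrentPage.Count > 0 ? users.CurrentPage[0].Id : null;
}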
Reset the password Microsoft Graph SDK 5

Microsoft Graph SDK 5 can also be used to implement the logic to reset the password and force a change on the next sign-in.

using Microsoft.Graph; using Microsoft.Graph.Models; using System.Security.Cryptography; namespace SelfServiceAzureAdPasswordReset; public class UserResetPasswordApplicationGraphSDK5 { private readonly GraphApplicationClientService _graphApplicationClientService; public UserResetPasswordApplicationGraphSDK5(GraphApplicationClientService graphApplicationClientService) { _graphApplicationClientService = graphApplicationClientService; } private async Task<string?> GetUserIdAsync(string email) { var filter = $"startswith(userPrincipalName,'{email}')"; var graphServiceClient = _graphApplicationClientService .GetGraphClientWithManagedIdentityOrDevClient(); var result = await graphServiceClient.Users.GetAsync((requestConfiguration) => { requestConfiguration.QueryParameters.Top = 10; if (!string.IsNullOrEmpty(email)) { requestConfiguration.QueryParameters.Search = $"\"userPrincipalName:{email}\""; } requestConfiguration.QueryParameters.Orderby = new string[] { "displayName" }; requestConfiguration.QueryParameters.Count = true; requestConfiguration.QueryParameters.Select = new string[] { "id", "displayName", "userPrincipalName", "userType" }; requestConfiguration.QueryParameters.Filter = "userType eq 'Member'"; // onPremisesSyncEnabled eq false requestConfiguration.Headers.Add("ConsistencyLevel", "eventual"); }); return result!.Value!.FirstOrDefault()!.Id; } public async Task<string?> ResetPassword(string email) { var graphServiceClient = _graphApplicationClientService .GetGraphClientWithManagedIdentityOrDevClient(); var userId = await GetUserIdAsync(email); if (userId == null) { throw new ArgumentNullException(nameof(email)); } var password = GetRandomString(); await graphServiceClient.Users[userId].PatchAsync( new User { PasswordProfile = new PasswordProfile { Password = password, ForceChangePasswordNextSignIn = true } }); return password; } private static string GetRandomString() { var random = $"{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}-AC"; return random; } private static int GenerateRandom() { return RandomNumberGenerator.GetInt32(100000000, int.MaxValue); } }

Any Razor page can use the service and update the password. The Razor Page requires protection to prevent any user or bot from updating any other user account. Some type of secret is required to use the service, or an extra id which can be created by an internal IT admin. DDoS and bot protection are also required if the Razor page is deployed to a public endpoint, and a delay after each request must also be implemented. Extreme caution needs to be taken when exposing this business functionality.

private readonly UserResetPasswordApplicationGraphSDK5 _userResetPasswordApp; [BindProperty] public string Upn { get; set; } = string.Empty; [BindProperty] public string? Password { get; set; } = string.Empty; public IndexModel(UserResetPasswordApplicationGraphSDK5 userResetPasswordApplicationGraphSDK5) { _userResetPasswordApp = userResetPasswordApplicationGraphSDK5; } public void OnGet(){} public async Task<IActionResult> OnPostAsync() { if (!ModelState.IsValid) { return Page(); } Password = await _userResetPasswordApp .ResetPassword(Upn); return Page(); }

The demo application can be started and a password from a local member can be reset. The https://mysignins.microsoft.com/security-info URL can be used to test the new password and add MFA or whatever.

Notes

You can use this solution for applications with no user.
If using an administrator or a user to reset the passwords, then a delegated permission should be used with different Graph SDK methods and different Graph permissions.

Links

https://aka.ms/mysecurityinfo
https://learn.microsoft.com/en-us/graph/api/overview?view=graph-rest-1.0
https://learn.microsoft.com/en-us/graph/sdks/paging?tabs=csharp
https://learn.microsoft.com/en-us/graph/api/authenticationmethod-resetpassword?view=graph-rest-1.0&tabs=csharp


MicroServerlessLamba.NET Series – Intro: A Scalable Server-less Architecture in AWS Lambda with ASP.NET Core Microservices
MicroServerlessLamba.NET Series – Intro A Scalabl ...

Over the last 18 months I have been working with our team to develop a scalable microservice architecture in AWS Lambda.  Starting with very little experience in AWS prior to this undertaking and now feeling MUCH more comfortable with it, I wanted to summarize our journey and share it with others who may find it useful.  I would also love to hear and learn from those of you who may have feedback on our approach.

First, why server-less? While containers seem to be all the rage in the software industry (and I do appreciate their utility), I do not believe they are the silver bullet many present them to be.  In our situation, we were building a greenfield solution with no dependencies shackling us to a hosting mechanism that allowed for OS-level customizations or installs.  Server-less offerings were also a new cloud trend in the industry which almost everyone was using in one way or another, so we definitely knew we would utilize it as well.  The more we researched it, the more we thought we could use it for everything with little to no compromise. Every adopted technology has a cost in learning and developing an expertise.  This is especially apparent with a small team.  We wanted to move fast and were looking for every opportunity to save time.  The time we saved NOT learning Docker/Kubernetes/containers/orchestration definitely helped us deliver a working solution faster. I have no doubt we will eventually leverage containers for certain use cases, but we will continue to defer until we find a case where their value outweighs the costs.

.NET Core

The vast majority of our 10 years of code was developed in .NET 4.5, with a brief and unsuccessful detour in NodeJS.  Leadership wanted a new scalable API to extend our success with smaller customers to larger customers through API integrations with their software systems. At the time, .NET Core was just starting to gain some traction, offering cross-platform execution, better performance and improved APIs, all developed in open source.  These features, coupled with the easier transition from the full .NET Framework, made it the clear choice.  We settled on using .NET Core 2.1 in Linux as our base for the architecture.

Cloud Selection – Azure vs AWS

We started our new development effort with a 3-4 week evaluation in both Azure and AWS.  We leveraged architect teams from each vendor to expedite our POCs and leverage their expertise to further develop our designs. We successfully developed a simplified version of our planned architecture in each cloud and evaluated the experience. This was a really valuable step in our journey.  Thankfully it was fairly easy to set up time with architects from both vendors by working with our cloud representatives.  Both also provided access to their product teams for deeper questions and assistance. You should absolutely leverage these services if you are in a similar situation. In the end we decided to use AWS for a few reasons.  First, our team had more AWS experience overall and was more comfortable with it.  Next, Azure didn't yet support .NET Core 2.1 and wouldn't for another 4-6 months.  And lastly, our legacy products were already in AWS. Overall, both offered very similar capabilities/experiences and either could be an excellent choice for building a new server-less architecture.

Summary

In closing, this was the process we used in choosing the core platforms to build our new architecture upon.  I plan on writing a series of posts to continue sharing this journey and how we brought these platforms together.
If you have feedback on this post or questions that you would like covered in future posts, please leave a comment below. Cheers


The framework 'Microsoft.AspNetCore.App' version'3 ...
Category: Docker

Question How do you resolve the Error The framework 'Microsoft.AspNetC ...


Views: 1448 Likes: 146
Microsoft.Common.CurrentVersion.targets(4678,5): e ...
Category: .Net 7

Question How do you resolve the error "Severity Code De ...


Views: 0 Likes: 34
Solved: docker-compose build asp.net core 3.0 erro ...
Category: Docker

Problem The .NET Core frameworks can be found at ...


Views: 2666 Likes: 150
ASP.Net Core 2.2 MVC Error g.cshtml.cs: The type o ...
Category: HTML5

ASP.Net Core 2.2 MVC Err ...


Views: 5443 Likes: 138
MSBUILD : error MSB1011: Specify which project or ...
Category: Technology

Question How do you resolve the issue MSBUILD error MSB1011 Specify which p ...


Views: 2809 Likes: 102
How to Run DotNet Core 3.0 in Watch Mode without i ...
Category: .Net 7

Run DotNet Core 3.0 in Watch Mode without installing any extensions ...


Views: 673 Likes: 95
Microsoft.WebTools.Shared.Exceptions.WebToolsExcep ...
Category: .Net 7

Question How do you resolve Visual Studio Error When Publishing ...


Views: 5341 Likes: 96
Asp.Net Core 3.1 Error: FromSqlRaw or FromSqlInter ...
Category: SQL

Asp.Net Core 3.1 Error FromSqlRaw or FromSqlInterpolated was called with non-composable SQL and ...


Views: 1806 Likes: 114
Looking into Structured Logging with Serilog in an ASP.Net Core application
Looking into Structured Logging with Serilog in an ...

In this blog post I am going to look into logging and more specifically into structured logging. There is no need to stress how important logging is in any .Net application. I am going to talk about structured logging with Serilog and Seq, and I am going to create an ASP.Net Core application to demonstrate logging with Serilog. Let me give you a definition of what structured logging is before we go any further. Structured logging is a way of logging where, instead of writing just plain text data to our log files, we actually write meaningful log data by writing properties that are relevant to our application and values for those properties. This is a huge plus because now that we have structured data we can go on and write queries against it.

Logging to a text file, we log all sorts of information. We have different levels of information like warnings, errors, critical messages etc. In order to get some meaningful information out of it we have to use external tools or regular expressions. In many cases our logged data is written to many log files. As you can understand, in order to get the data we want, we need either to merge all those files or run the same regular expressions across all the files. With structured logging we can have different, more intelligent queries. We can query and get data for a particular order, or we can get error log messages where the order value is more than 5,000 USD. We could get data regarding failed login attempts.

One of the most important logging libraries is the Serilog logging library. When building your application, you can install the Serilog NuGet package, and once you have installed this NuGet package you can start writing simple log messages. These could be messages such as information messages or more important error messages, and when you configure Serilog in your application you can decide where you want your log output to be written to. We could write to rolling text files, which means a new text file will be created automatically each day, or we could write our log messages to a NoSQL data store like RavenDB or Seq. We can configure multiple destinations to be used by the same logger, so when we write a log message it will automatically be written to multiple output stores. When we're using Serilog there's a number of different places we can write our application log data to, such as the console, rolling files or single files. We can also write log data to MongoDB, RavenDB and Seq.

I have mentioned Seq a few times so let me explain what Seq is. It runs as a Windows service and it allows us to configure our applications, via Serilog, to log structured messages to it via HTTP or HTTPS. When our applications log structured messages to the Seq server, it stores the named properties/values so we can query them later. As part of the Seq server Windows service, it also provides a web interface, so we can connect to it via a browser. This browser application allows us to view log messages and write queries against them. You can download Seq here. Follow the instructions of the wizard to install the Seq Windows service. When you install it you should see something similar to the picture below. The default port for listening to events in Seq is 5341, but in my case I chose port 5343.

Before we begin with our hands-on example I must dive into the mechanics of Serilog. Besides information messages we can log structured data using Serilog. We can do that by using named properties and values, e.g. Log.Information("Added user {UserName}, Team {UserTeam}", name, team); Between the braces here we have UserName and UserTeam, and these are named properties in Serilog. When we define named properties we need to give them some values, so after we have defined the message containing these named properties, we then specify the actual values for the name and the team. When we embed structured data in our message here, if the underlying sink supports it, these named properties and their associated values will actually be stored in the underlying data store, allowing them to be queried. If you want to see all the available sinks currently supported click here.

Serilog has a number of different concepts of data types, the first of which is what it defines as scalar data types. Serilog sees these scalar data types as things that can be reliably stored without needing to alter their representation in any way. The types of scalar data include Boolean, numbers, strings, dates and times, as well as a few other things. In the Boolean category we'll just have the bool type. In the numbers category we have the basic numbers. In the string category we have our string type, and for dates and times we have the DateTime, DateTimeOffset, and TimeSpan types. For the others we have Guids, URIs, as well as nullable versions of these other scalar types, so when Serilog sees that we're using one of these values for a named property it'll just go ahead and store its basic representation without doing any further processing on the value. As well as scalar values, Serilog will store collections and objects. If the type implements IEnumerable, it will go ahead and iterate through the contents of this IEnumerable and store the values. Serilog will also store the contents of a dictionary for us, but with the caveat that the key to the dictionary must be one of the types that Serilog considers to be scalar, so here the key to the dictionary is a string type.
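To illustrate how these data type categories come out, here is a short sketch; the names and values are invented for illustration:

// A quick sketch of the Serilog data type categories described above.
var team = new[] { "Panathinaikos", "Barcelona", "Liverpool" };
var goals = new Dictionary<string, int> { ["Panathinaikos"] = 2, ["Barcelona"] = 1 };

Log.Information("User {UserName} signed in at {SignInTime}", "Nick", DateTime.UtcNow); // scalars
Log.Information("My favourite teams are {Teams}", team); // IEnumerable stored as a collection
Log.Information("Goals per team {Goals}", goals); // dictionary with scalar (string) keys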
We log structured data by using named properties and values, e.g.

Log.Information("Added user {UserName}, Team {UserTeam}", name, team);

Between the braces we have UserName and UserTeam, and these are named properties in Serilog. When we define named properties we need to give them values, so after the message template containing the named properties, we specify the actual values for the name and the team. When we embed structured data in a message like this, and the underlying sink supports it, the named properties and their associated values are stored in the underlying data store, allowing them to be queried. If you want to see all the available sinks currently supported, click here.

Serilog has a number of different concepts of data types. The first are what it defines as scalar data types. Serilog sees scalar data types as things that can be reliably stored without needing to alter their representation in any way. The scalar types include Booleans, numbers, strings, dates and times, as well as a few other things. In the Boolean category we just have the bool type. In the numbers category we have the basic numeric types. In the string category we have the string type, and for dates and times we have the DateTime, DateTimeOffset and TimeSpan types. For the others we have Guids and URIs, as well as nullable versions of the other scalar types. When Serilog sees that we are using one of these values for a named property, it stores its basic representation without doing any further processing on the value.

As well as scalar values, Serilog will store collections and objects. If the type implements IEnumerable, it will iterate through the contents and store the values. Serilog will also store the contents of a dictionary for us, with the caveat that the dictionary key must be one of the types Serilog considers scalar, for example a dictionary keyed by string.

Now that we have a good understanding of structured logging and Serilog, we can create a simple ASP.Net Core application. Launch Visual Studio 2015 and create a default ASP.Net Core Web application. Build and run the project. The application out of the box should look something like this

I am going to install some packages now: Serilog, Serilog.Extensions.Logging and Serilog.Sinks.Seq. I will use the Package Manager Console to do that. Have a look at the picture below.

We need to do some configuration in the Startup.cs file.

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();

        if (env.IsDevelopment())
        {
            // This will push telemetry data through the Application Insights
            // pipeline faster, allowing you to view results immediately.
            builder.AddApplicationInsightsSettings(developerMode: true);
        }

        Configuration = builder.Build();

        // Configure the global Serilog logger to write to the local Seq server.
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Debug()
            .Enrich.FromLogContext()
            .WriteTo.Seq("http://localhost:5343")
            .CreateLogger();
    }

The Log.Logger configuration at the end of the constructor is the part I added to the Startup class.
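One detail worth adding here, as a hedged aside that is not part of the original walkthrough: sinks such as Seq buffer and batch events in the background, so the static logger should be flushed when the application shuts down, otherwise the last few events may be lost. A minimal sketch, assuming Serilog 2.x:

// On application shutdown (e.g. at the end of Main), flush any
// buffered events - such as batched Seq writes - to the sinks.
Log.CloseAndFlush();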
I also need to add a line of code in the Configure method, again in Startup.cs.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();
    // The new line: plug Serilog into the ASP.NET Core logging pipeline.
    loggerFactory.AddSerilog();

    app.UseApplicationInsightsRequestTelemetry();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseBrowserLink();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseApplicationInsightsExceptionTelemetry();
    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

The loggerFactory.AddSerilog() call adds Serilog to the ILoggerFactory that is passed into the Configure() method. Now if we build and run the application again, we will see something like the picture below in the Seq browser interface. All the events (every detailed event until the Home page is fully served) are logged with Serilog, and we can see them in Seq and search through them very easily. Have a look at the picture below.

In order to use Serilog to log our custom events, we simply need to start logging from our own controllers. In my sample application I have only the HomeController so far. To start logging from a controller, we take a dependency on ILogger<T>:

public class HomeController : Controller
{
    readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        _logger.LogInformation("Hello, world!");
        return View();
    }

    public IActionResult About()
    {
        ViewData["Message"] = "Your application description page.";
        return View();
    }

    public IActionResult Contact()
    {
        ViewData["Message"] = "Your contact page.";
        return View();
    }

    public IActionResult Error()
    {
        return View();
    }
}

The constructor, the _logger field and the LogInformation call are the additions. Build and run your application, switch to the Seq browser window and search for “Hello”. Have a look at the picture below.

We can also pass an object of a type that implements IEnumerable as a property; Serilog will treat this property as a collection.

public IActionResult Index()
{
    var team = new[] { "Panathinaikos", "Barcelona", "Liverpool" };
    _logger.LogInformation("My fav team is {Team}", team);
    _logger.LogInformation("Hello, world!");
    return View();
}

I added the team lines to the Index method of the HomeController. Build and run the project again; when I switch to the Seq browser window I see something like the picture below.

Let me create a simple class object and log its values. Firstly, create a “Models” folder in your project, then add the following class to it.

public class ContactModel
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Email { get; set; }
}

Then update the Index method of HomeController.cs, passing the contact object itself so its values are logged.

public IActionResult Index()
{
    //var team = new[] { "Panathinaikos", "Barcelona", "Liverpool" };
    //_logger.LogInformation("My fav team is {Team}", team);
    //_logger.LogInformation("Hello, world!");

    var contact = new ContactModel();
    contact.Name = "Nick";
    contact.Surname = "Kantzelis";
    contact.Email = "[email protected]";

    _logger.LogInformation("Contact is {@ContactModel}", contact);

    return View();
}

Build and run the project again. When I switch to the Seq browser window I see something like the picture below.
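One thing worth spelling out here, as an aside to the original walkthrough: the @ in {@ContactModel} is Serilog's destructuring operator. Without it, Serilog stores an unfamiliar object type using its ToString() representation; with it, Serilog captures the object's public properties as structured data. A minimal sketch of the difference:

// Without @: the model is stored as the result of contact.ToString(),
// a single opaque string value.
_logger.LogInformation("Contact is {ContactModel}", contact);

// With @: the model is destructured, so Name, Surname and Email
// become individually queryable properties in Seq.
_logger.LogInformation("Contact is {@ContactModel}", contact);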
There are different logging levels available to us in Serilog. We can use verbose events to log very low-level debugging and internal information about what our code is doing. Then we have the debug level, for events related to the control logic, such as which modules or branches of the application are being used, as well as diagnostic information that might help us diagnose faults, but not necessarily as low-level as the verbose events. We can think of the information level as being for events that are of some interest to external parties; these high-level events could be something like a user placing an order for a certain value. Next, we have the warning level. We can use this level to log information about potential problems that may be occurring in the application; even if the system is not currently experiencing any actual errors, this level can give us advance warning about things that could break in the future. Next, we have the error level, which allows us to log information about unexpected failures in the system; if there has been an error but we are still able to continue running, we can use an error level event. If, however, a failure has occurred that stops the system from being able to process any more information, we use the fatal level, which allows us to log information about critical system failures.

There are various ways you can format your messages, as we saw earlier in our examples; formatting is another great strength of Serilog. You can also log events selectively by using filters. For example, we could write something like this (Matching lives in the Serilog.Filters namespace):

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .Enrich.FromLogContext()
    .WriteTo.Seq("http://localhost:5343")
    .Filter.ByExcluding(Matching.WithProperty<int>("Count", c => c > 30))
    .CreateLogger();

In this post we looked into structured logging, Serilog, sinks, Seq, the data types supported by Serilog, and Serilog's formatting options and filtering capabilities. Hope it helps!!!


Asp.Net Core 3.1 MVC 5 Error ViewBag does not Exis ...
Category: .Net 7

Question How do you resolve an Asp.Net Core 3.1 MVC 5 Error "ViewBag does ...


Views: 421 Likes: 90
HTTP Dot Net Core 2.2 Error 500.30 - ANCM In-Proce ...
Category: .Net 7

When developing in .Net Core 2.2 sometimes when there is a configuration error, the debugger fail ...


Views: 2971 Likes: 119
Microsoft.WebTools.Shared.Exceptions.WebToolsExcep ...
Category: Research

When building a web application using Microsoft's Visual Studio, it is not uncommon to encounter ...


Views: 0 Likes: 27
ASP.NET Core authentication using Microsoft Entra External ID for customers (CIAM)
ASP.NET Core authentication using Microsoft Entra ...

This article looks at implementing an ASP.NET Core application which authenticates using Microsoft Entra External ID for customers (CIAM). The ASP.NET Core authentication is implemented using the Microsoft.Identity.Web Nuget package. The client implements the OpenID Connect code flow with PKCE and is a confidential client.

Code: https://github.com/damienbod/EntraExternalIdCiam

Microsoft Entra External ID for customers (CIAM) is a new Microsoft product for customer (B2C) identity solutions. It brings many changes compared to the existing Azure AD B2C solution and adopts many of the features from Azure AD. At present, the product is in public preview.

App registration setup

As with any Azure AD, Azure AD B2C or Azure AD CIAM application, an Azure App registration is created and used to define the authentication client. The ASP.NET Core application is a confidential client and must use a secret or a certificate to authenticate the application as well as the user. The client authenticates using an OpenID Connect (OIDC) confidential code flow with PKCE. The implicit flow does not need to be activated.

User flow setup

In Microsoft Entra External ID for customers (CIAM), the application must be connected to the user flow. In external identities, a new user flow can be created and the application (the Azure App registration) can be added to it. The user flow can be used to define the specific customer authentication requirements.

ASP.NET Core application

The ASP.NET Core application is implemented using the Microsoft.Identity.Web Nuget package. The recommended flow for trusted applications is the OpenID Connect confidential code flow with PKCE. This is set up using the AddMicrosoftIdentityWebApp method together with the EnableTokenAcquisitionToCallDownstreamApi method. The CIAM client configuration is read from the EntraExternalID json section.

services.AddDistributedMemoryCache();

services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(
        builder.Configuration.GetSection("EntraExternalID"))
    .EnableTokenAcquisitionToCallDownstreamApi()
    .AddDistributedTokenCaches();

In the appsettings.json, user secrets or the production setup, the client specific configurations are defined. The settings must match the Azure App registration. The SignUpSignInPolicyId is no longer used, compared to Azure AD B2C.

// -- using ciamlogin.com --
"EntraExternalID": {
  "Authority": "https://damienbodciam.ciamlogin.com",
  "ClientId": "0990af2f-c338-484d-b23d-dfef6c65f522",
  "CallbackPath": "/signin-oidc",
  "SignedOutCallbackPath": "/signout-callback-oidc"
  // "ClientSecret": "--in-user-secrets--"
},
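The snippet above only registers the authentication services. As a hedged aside, not taken from the linked repository, a minimal sketch of the standard request pipeline wiring such an application also needs (names follow the default ASP.NET Core minimal hosting template):

var app = builder.Build();

app.UseStaticFiles();
app.UseRouting();

// Authentication must run before authorization so the OpenID Connect
// session is established for each request.
app.UseAuthentication();
app.UseAuthorization();

app.MapDefaultControllerRoute();

app.Run();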
Notes

I always try to implement user flows for B2C solutions and avoid custom setups, as custom setups are hard to maintain, expensive to keep updated and hard to migrate when the product reaches end of life. Setting up a CIAM client in ASP.NET Core works without problems. CIAM offers many more features but is still missing some essential ones. This product is starting to look really good and will be a great improvement on Azure AD B2C when it is feature complete.

Strong authentication is missing from Microsoft Entra External ID for customers (CIAM), and this makes it hard to test using my Azure AD users. Hopefully FIDO2 and passkeys will be supported soon. See the following link for the supported authentication methods: https://learn.microsoft.com/en-us/azure/active-directory/external-identities/customers/concept-supported-features-customers

I also require a standard OpenID Connect identity provider (code flow confidential client with PKCE support) in most of my customer solution rollouts. This is not supported at present.

With CIAM, new possibilities also open up for creating single solutions that support both B2B and B2C use cases. Support for Azure security groups and Azure roles in Microsoft Entra External ID for customers (CIAM) is one of the features which makes this possible.

Links

https://learn.microsoft.com/en-us/azure/active-directory/external-identities/
https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-external-id
https://www.cloudpartner.fi/?p=14685
https://developer.microsoft.com/en-us/identity/customers
https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-external-id-public-preview-developer-centric/ba-p/3823766
https://github.com/AzureAD/microsoft-identity-web


How to choose a Framework when making Web app
Category: Computer Programming

Choosing the right framework for develop ...


Views: 0 Likes: 29
Dot Net Core 3.1 Link Tag Helpers not Clickable
Category: .Net 7

Problem When developing in Asp.Net Core 3.1 and the links (Asp.Net Core Tag Hel ...


Views: 434 Likes: 100
