Activities of "gterdem"

If you are using containerized deployment, you should override App__HealthCheckUrl with the service name, like:

App__HealthCheckUrl=http://api-service-name/health-status
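As a minimal sketch, assuming a docker-compose deployment where the API container is registered under the hypothetical service name api-service-name, the override could look like:

```yaml
# Sketch of a docker-compose override; "api-service-name" and the image name
# are assumptions for your environment.
# The double underscore in the environment variable maps to the ":" separator
# used in appsettings.json (App:HealthCheckUrl).
services:
  public-web:
    environment:
      - App__HealthCheckUrl=http://api-service-name/health-status
  api-service-name:
    image: mycompany/api-service:latest  # hypothetical image
```

This way the health check resolves through the container network DNS instead of localhost.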

Both the public web app and the BlazorServer back-office (administration) application use the same application configuration endpoints to get their initial data.

These are the application configuration and application localization endpoints, which provide the authentication data (current user info, etc.), features, settings, multi-tenancy information, and localization resources, respectively.

This information is cached and shared to improve performance. That's why a dedicated caching server (Redis) is used.
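As a sketch, pointing the applications at a shared Redis instance in appsettings.json typically looks like the fragment below (the host name redis and the port are assumptions for your environment):

```json
{
  "Redis": {
    "Configuration": "redis:6379"
  }
}
```

All applications that share this Redis configuration will read the same cached configuration data instead of rebuilding it per instance.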

Hello,

There is no standard way of handling background jobs when talking about microservices. You can let each microservice handle its own background jobs, or create a new microservice (essentially a console application) dedicated to handling all the background jobs.

The advantages and disadvantages depend on your use case.

Okay, now I understand.

First, for microservice-to-microservice calls, we introduced Integration Services. We strongly recommend using integration services for this kind of usage.

If you want to keep using the existing authorized endpoints, you probably have an IdentityClients configuration as below:

"IdentityClients": {
    "Default": {
      "GrantType": "client_credentials",
      "ClientId": "BookStore_OrderService",
      "ClientSecret": "1q2w3e*",
      "Authority": "https://localhost:44322", // On production, this must be the internal service name
      "Scope": "ProductService"
    }
  }

Instead of using https://10.200.40.25:44322, try using the docker service name, something like http://myauthservice.

Token validation, etc., should be (and will be) done through the internal network.
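Putting those two notes together, a production configuration could look like the sketch below (myauthservice is an assumed docker service name, not something from your setup):

```json
"IdentityClients": {
    "Default": {
      "GrantType": "client_credentials",
      "ClientId": "BookStore_OrderService",
      "ClientSecret": "1q2w3e*",
      "Authority": "http://myauthservice",
      "Scope": "ProductService"
    }
  }
```

Note that the scheme is plain HTTP here, since the client-credentials token request never leaves the container network.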

Please share the related logs of your

  • Application
  • AuthServer

In the MVC/BlazorServer apps, you can see a configuration as below under the OpenIdConnect configuration:

if (Convert.ToBoolean(configuration["AuthServer:IsOnK8s"]))
{
    context.Services.Configure<OpenIdConnectOptions>("oidc", options =>
    {
        options.MetadataAddress = configuration["AuthServer:MetaAddress"]!.EnsureEndsWith('/') +
                                  ".well-known/openid-configuration";

        var previousOnRedirectToIdentityProvider = options.Events.OnRedirectToIdentityProvider;
        options.Events.OnRedirectToIdentityProvider = async ctx =>
        {
            // Intercept the redirection so the browser navigates to the right URL in your host
            ctx.ProtocolMessage.IssuerAddress = configuration["AuthServer:Authority"]!.EnsureEndsWith('/') + "connect/authorize";

            if (previousOnRedirectToIdentityProvider != null)
            {
                await previousOnRedirectToIdentityProvider(ctx);
            }
        };

        var previousOnRedirectToIdentityProviderForSignOut = options.Events.OnRedirectToIdentityProviderForSignOut;
        options.Events.OnRedirectToIdentityProviderForSignOut = async ctx =>
        {
            // Intercept the redirection for sign-out so the browser navigates to the right URL in your host
            ctx.ProtocolMessage.IssuerAddress = configuration["AuthServer:Authority"]!.EnsureEndsWith('/') + "connect/logout";

            if (previousOnRedirectToIdentityProviderForSignOut != null)
            {
                await previousOnRedirectToIdentityProviderForSignOut(ctx);
            }
        };
    });
}

You should set AuthServer:IsOnK8s to true since you are running on containers; this means you will log in through the browser but obtain/validate the tokens through the internal network. Set AuthServer:Authority to the real DNS, and set AuthServer:MetaAddress to the internal docker service address.
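As a sketch of that split, the AuthServer section of the web application's production settings could look like the fragment below (the DNS name auth.mydomain.com and the service name myauthservice are assumptions, not values from your deployment):

```json
"AuthServer": {
    "IsOnK8s": "true",
    "Authority": "https://auth.mydomain.com",
    "MetaAddress": "http://myauthservice"
  }
```

The Authority value is what the browser is redirected to, so it must be publicly resolvable; the MetaAddress is only resolved inside the cluster.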

And by the way, if you are preparing the sample, try to deploy it to the production cluster rather than the local one, because the manifest configurations are usually different. It could work locally but not in production; we have already faced such problems.

Thanks & Regards,

Well, I cannot deploy to the production cluster. But there shouldn't be any differences from the local cluster other than DNS mapping and SSL generation.

We will be publishing eShopOnAbp with the new version soon. But this issue is not related to ABP but to .NET 8 itself. Hence, I would suggest checking Stack Overflow as well; there should be others having the same problem already.

It seems you are trying to run your application on HTTPS inside the internal network, which causes the problem. Try removing all the 443 port exposures and run the application on HTTP. The ingress should handle the HTTPS-to-internal-port mapping.
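As a sketch of that setup, a Kubernetes ingress can terminate TLS at the edge and forward to the plain-HTTP container port (the host, secret, and service names below are all assumptions for illustration):

```yaml
# Hypothetical nginx ingress: TLS terminates here, the backend stays on HTTP.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress              # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - app.mydomain.com          # hypothetical public host
      secretName: app-tls-secret    # hypothetical TLS secret
  rules:
    - host: app.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp        # hypothetical service listening on HTTP
                port:
                  number: 80
```

With this shape, the pods never expose 443 themselves; only the ingress controller speaks HTTPS.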

I'll try to create a public sample for .NET 8 local k8s deployment.

Hi, do these links help solve your problem?

  • https://blog.baeke.info/2020/12/07/certificates-with-azure-key-vault-and-nginx-ingress-controller/
  • https://blogs.perficient.com/2023/06/28/dealing-with-wildcard-ssl-certificates-on-azure-and-kubernetes/

We have an application deployment guide for Azure at https://docs.abp.io/en/commercial/latest/startup-templates/application/azure-deployment/azure-deployment?UI=MVC&DB=EF&Tiered=No, but we don't have a step-by-step guide for deploying the microservice template to Azure, simply because it is not really related to the ABP framework and we don't have enough knowledge in that area.

You can share the log information about the error you come across after deployment, when you navigate to your application, so we can diagnose the problem better. A screenshot of the Azure services (or AWS, Google Kubernetes Engine, etc.) alone would not help much, because those platforms are not in our expertise.
