
Codit Blog

Posted on Friday, April 13, 2018 12:25

by Tom Kerkhove

Azure API Management released a new version that changes the OpenAPI interpretation. This article dives into the potential impact on the consumer experience of your APIs.

Providing clean and well-documented APIs is a must. This allows your consumers to know what capabilities you provide, what they are for and what to expect.

This is where the OpenAPI specification, aka Swagger, comes in: it standardizes how APIs are described across the industry, regardless of the technology underneath.

Recently, the Azure API Management team started releasing a new version of the product with some new features and some important changes in how they interpret the OpenAPI specification while importing/exporting them.

Before we dive into the changes to OpenAPI interpretation, I'd like to highlight that they've also added the capability to display the id of a specific operation. In the past, you had to use the old Publisher portal for this, but now you can find it via API > Operation > Frontend.

Next to that, as of last Sunday, the old Publisher portal should be fully gone now, except for the analytics part.

OpenAPI Interpretation

The latest version also changes the way OpenAPI specifications are interpreted: they are now fully based on the operation object as defined by the OpenAPI spec.

Here are the changes in a nutshell:

  • Id of the operation - The operation id is based on operation.operationId; otherwise one is generated, similar to get-foo
  • Name of the operation - The display name is based on operation.summary; otherwise it uses operation.operationId. If neither is specified, a name similar to Get - /foo is generated
  • Description of the operation - The description is based on operation.description

I like this change because it makes sense. However, it can be a breaking change for your API documentation, depending on how you generated it in the past.

The reason for this is that before rolling out this change the interpretation was different:

  • Id of the operation was a generated id
  • Name of the operation was based on operation.operationId
  • Description of the operation was based on operation.description and fell back on operation.summary

How I did it in the past

For all the projects I work on, I use Swashbuckle because it's very easy to set up and use, and it ties into the standard XML documentation.

Here is an example of the documentation I provide for my health endpoint for Sello, which I use for demos.
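In essence, it boils down to something like this. The sketch below assumes an ASP.NET Core controller with Swashbuckle and XML comments wired up; the route, class name and responses are illustrative, not the actual Sello code:

```csharp
using Microsoft.AspNetCore.Mvc;

public class HealthController : Controller
{
    /// <summary>Provides an indication about the health of the API.</summary>
    /// <response code="200">API is healthy</response>
    /// <response code="503">API is unhealthy or degraded</response>
    [HttpGet("api/v1/health")]
    [ProducesResponseType(200)]
    [ProducesResponseType(503)]
    public IActionResult Get()
    {
        // Run the actual health checks here; always healthy in this sketch.
        return Ok();
    }
}
```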

As you can see, everything is right there: via the operation's XML documentation I specify what the operation is called, give a brief summary of what it does, and describe what my consumers can expect as responses.

The OpenAPI specification that is generated will look like this:

Once this was imported into Azure API Management, the developer experience was similar to this:

However, this approach is no longer what I'd like to offer to my consumers, because if you import it after the new version it looks like this:

How I'm doing it today

Aligning with the latest interpretation was fairly easy, to be honest. Instead of providing a description of what the operation does via summary, I started using remarks instead.

Next to that, I'm now using summary to give the operation a friendly name, and I assigned a better operationId via SwaggerOperation.

This is how it looks in code:
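A minimal sketch, under the same assumptions as before, with the operation id chosen for illustration:

```csharp
using Microsoft.AspNetCore.Mvc;
using Swashbuckle.AspNetCore.Annotations;

public class HealthController : Controller
{
    /// <summary>Get Health</summary>
    /// <remarks>Provides an indication about the health of the API.</remarks>
    /// <response code="200">API is healthy</response>
    /// <response code="503">API is unhealthy or degraded</response>
    [HttpGet("api/v1/health")]
    [SwaggerOperation(OperationId = "Health_Get")] // requires Swashbuckle's annotation support to be enabled
    [ProducesResponseType(200)]
    [ProducesResponseType(503)]
    public IActionResult Get()
    {
        // Run the actual health checks here; always healthy in this sketch.
        return Ok();
    }
}
```

With this, summary drives the display name, remarks drives the description, and the operation id stays stable across imports.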

The new OpenAPI specification is compatible with the recent changes and will look like this:

Once this is imported the developer experience is maintained and looks similar to this:

When you go to the details of the new operation in the Azure portal, you will see that all our information is successfully imported:

Conclusion

Azure API Management rolled out a change to its OpenAPI interpretation that provides more flexibility: you can now define the operation id to use, and the interpretation aligns with the general specification.

This change is great, but it might have an impact on your current API documentation, similar to what I've experienced. With the above changes, you are good to go and your consumers will not even notice it.

Thanks for reading,

Tom.

Categories: API Management, Azure
Written by: Tom Kerkhove

Posted on Tuesday, April 10, 2018 14:48

by Sagar Sharma

Calling on-premise hosted web services from Logic Apps is super easy now. Use the on-premise data gateway and a custom connector to achieve this integration.

In this article, I will show you how to connect from Logic Apps to on-premise hosted HTTP endpoints that have no public access. We will do this by using the Logic Apps on-premise data gateway and a Logic Apps custom connector. This feature was recently made available by the Logic Apps product team. If you have never used the on-premise data gateway before, please read my previous blog post “Installing and Configuring on-premise data gateway for Logic Apps”, which contains a detailed explanation of it.

Part 1: Deploying a web service on a local machine

You are already familiar with this part:

  • Open Visual Studio. Create a new project > Web > ASP.NET Web Application 

  • I am doing this in the most classical way: empty template, no authentication, and Add > New Item > Web Service (ASMX). You can do it your preferred way: REST, MVC Web API, WCF web service, etc.
  • Write a web method. Again, I am doing it the easiest way, for example a "HelloWorld" method with one parameter (see the sketch after this list):

  • Build the web application and deploy it to local IIS.
  • Browse the website and save the full WSDL. You will need this WSDL file in part 2.
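For reference, a minimal sketch of such a service; the class name and namespace are illustrative:

```csharp
using System.Web.Services;

/// <summary>
/// A classic ASMX web service with a single demo method.
/// </summary>
[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class HelloService : WebService
{
    // The one-parameter web method used in this walkthrough.
    [WebMethod]
    public string HelloWorld(string name)
    {
        return "Hello " + name;
    }
}
```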

Part 2: Creating a custom connector for the web service

  • Log on to the Azure subscription where you have the on-premise data gateway registered. Create a resource of type “Logic Apps Custom Connector”.
  • Open the custom connector and click Edit. Choose SOAP as the API endpoint and SOAP to REST as the call mode, then browse to upload the WSDL file of your on-premise web service.
     

 Please note that if you are trying to access a REST/Swagger/OpenAPI web service, you will need to choose REST as the API endpoint.

  • Don’t forget to select “Connect via on premise data gateway”

     
  • Click Continue to go from the General tab to the Security tab

     
  • Again, click Continue, as there is no authentication for this demo
  • In the Definition tab, fill in a summary and description. Keep the default values for the rest of the configuration and click “Update connector” at the top right of the screen.
     
  • The custom connector is ready to use now.

Part 3: Integrating with an on-premise web service from Logic Apps

  • Create a new Logic App. Start with a Recurrence trigger.
  • Add an action and search for your custom connector:
     
  • Choose your web method as the action. Then choose the on-premise data gateway you want to use to connect to your on-premise web service, and click Create
     
  • Enter a value for the name parameter; your final Logic App should look like the following:
     
  • Click Save and then run it. Within a moment, you should see a response from your on-premise web service based on your input parameter
     

Thanks for reading! I hope you've found this article useful. If you have any questions about this or are looking for additional information, feel free to comment below.


Posted on Monday, March 26, 2018 15:18

by Sagar Sharma

If you want to connect to your on-premise data sources from Azure-hosted Logic Apps, you can use the on-premise data gateway. Let's see how to install and configure it.

Logic Apps is a new-generation integration platform in Azure. Because it is a serverless technology, there are no upfront hardware or licensing costs, which leads to a faster time to market. Thanks to these qualities, Logic Apps is picking up pace in the integration world.

Because Logic Apps are hosted in the cloud, it's not straightforward to access data sources hosted on an on-premise network. To overcome that limitation, Microsoft introduced the on-premise data gateway.

The gateway acts as a bridge that provides quick data transfer and encryption between your on-premises data sources and your Logic Apps. All traffic originates as secure outbound traffic from the gateway agent to Logic Apps, through Azure Service Bus Relay in the background.

Currently, the gateway supports connections to the following data sources hosted on-premises:

  • BizTalk Server 2016
  • PostgreSQL
  • DB2
  • SAP Application Server
  • File System
  • SharePoint
  • Informix
  • SQL Server
  • MQ
  • Teradata
  • Oracle Database
  • SAP Message Server

 

Part 1: How does the Logic App on-premise data gateway work?

  1. The gateway cloud service creates a query, along with the encrypted credentials for the data source, and sends the query to the queue for the gateway to process.
  2. The gateway cloud service analyzes the query and pushes the request to the Azure Service Bus.
  3. The on-premises data gateway polls the Azure Service Bus for pending requests.
  4. The gateway gets the query, decrypts the credentials, and connects to the data source with those credentials.
  5. The gateway sends the query to the data source for execution.
  6. The results are sent from the data source, back to the gateway, and then to the gateway cloud service. The gateway cloud service then uses the results.
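The gateway itself is closed source, but purely to make steps 3 to 6 concrete, here is a conceptual sketch of that polling loop. Every type and method in it is invented for illustration and is not the real gateway API:

```csharp
#nullable enable
using System;
using System.Threading;

// All types here are invented for this sketch; they are not the real gateway API.
public sealed record GatewayRequest(string Id, string Query, byte[] EncryptedCredentials);

public interface IRelayQueue
{
    GatewayRequest? Poll();                            // step 3: poll Service Bus for pending requests
    void SendResult(string requestId, object result);  // step 6: return results to the gateway cloud service
}

public sealed class GatewayWorker
{
    private readonly IRelayQueue _queue;
    private readonly Func<byte[], string> _decryptCredentials;
    private readonly Func<string, string, object> _executeOnDataSource;

    public GatewayWorker(IRelayQueue queue,
                         Func<byte[], string> decryptCredentials,
                         Func<string, string, object> executeOnDataSource)
    {
        _queue = queue;
        _decryptCredentials = decryptCredentials;
        _executeOnDataSource = executeOnDataSource;
    }

    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            var request = _queue.Poll();                                        // step 3
            if (request is null) { Thread.Sleep(500); continue; }

            var credentials = _decryptCredentials(request.EncryptedCredentials); // step 4
            var result = _executeOnDataSource(request.Query, credentials);       // steps 4-5
            _queue.SendResult(request.Id, result);                               // step 6
        }
    }
}
```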

Part 2: How to install the on-premises data gateway?

Installing and registering the on-premise data gateway involves the following steps:

  1. Download and run the gateway installer on a local computer. Link: http://go.microsoft.com/fwlink/?LinkID=820931&clcid=0x409

  2. Review and accept the terms of use and privacy statement. Specify the path on your local computer where you want to install the gateway.



  3. When prompted, sign in with your Azure work or school account, not a Microsoft account.



  4. Now register your installed gateway with the gateway cloud service. Choose "Register a new gateway on this computer". Provide a name for your gateway installation. Create a recovery key, then confirm your recovery key.


    To achieve high availability, you can also configure the gateway in cluster mode. To do so, select “Add to an existing gateway cluster”. 

    To change the default region for the gateway cloud service and Azure Service Bus used by your gateway installation, choose “Change Region”. For example, you might select the same region as your logic app, or select the region closest to your on-premises data source so you can reduce latency. Your gateway resource and logic app can have different locations.

  5. Click Configure and your gateway installation should be ready. Now we need to register this on-premise installation in Azure. Log on to your Azure subscription; make sure you use the subscription associated with your work/school tenant. Create a new resource of type “On-premises data gateway”



  6. Enter a name for your gateway. Choose a subscription and resource group. Make sure you choose the same location as you selected during the gateway installation; you should then be able to see the name of your gateway installation



  7. Click Create, and within a moment you will be able to use the on-premise gateway in your Logic Apps.

  8. You will then be able to choose the on-premise gateway installation to access on-premise hosted data sources in supported connectors.
    For example:
    • File System
    • SQL Server

Some important things to keep in mind

  • The on-premise data gateway is firewall friendly. There are no inbound connections to the gateway from the Logic Apps. The gateway always uses outbound connections.
  • The Logic App on-premise data gateway also supports high availability via cluster configuration. You can have more than one gateway installation and configure them in cluster mode.
  • When you install the gateway on one machine, it can connect to all hosts within that network, so there is no need to install a gateway on each data source machine; one per network is enough.
  • Install the on-premises data gateway only on a local computer. You can't install the gateway on a domain controller.
  • Don't install the gateway on a computer that turns off, goes to sleep, or doesn't connect to the Internet because the gateway can't run under those circumstances. Also, the gateway performance might suffer over a wireless network.
  • During installation, you must sign in with a work or school account that's managed by Azure Active Directory (Azure AD), not a Microsoft account.

You can find all official limitations around Logic Apps at https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config

Configure a firewall or proxy

  • The gateway creates an outbound connection to Azure Service Bus Relay. To provide proxy information for your gateway, see Configure proxy settings.
  • To check whether your firewall or proxy might block connections, confirm whether your machine can actually connect to the internet and Azure Service Bus. From a PowerShell prompt, run this command:

 Test-NetConnection -ComputerName watchdog.servicebus.windows.net -Port 9350

    • This command only tests network connectivity and connectivity to the Azure Service Bus. So, the command doesn't have anything to do with the gateway or the gateway cloud service that encrypts and stores your credentials and gateway details.
    • Also, this command is only available on Windows Server 2012 R2 or later, and Windows 8.1 or later. On earlier OS versions, you can use Telnet to test connectivity. Learn more about Azure Service Bus and hybrid solutions.
    • If TcpTestSucceeded is not set to True, you might be blocked by a firewall. If you want to be comprehensive, substitute the ComputerName and Port values with the values listed under Configure ports in this article.
  • The firewall might also block connections that the Azure Service Bus makes to the Azure datacenters. If this scenario happens, approve (unblock) all the IP addresses for those datacenters in your region. For those IP addresses, get the Azure IP addresses list here.

Configure ports

  • The gateway creates an outbound connection to Azure Service Bus and communicates on outbound ports: TCP 443 (default), 5671, 5672, 9350 through 9354. The gateway doesn't require inbound ports.

| Domain names | Outbound ports | Description |
| --- | --- | --- |
| *.analysis.windows.net | 443 | HTTPS |
| *.login.windows.net | 443 | HTTPS |
| *.servicebus.windows.net | 5671-5672 | Advanced Message Queuing Protocol (AMQP) |
| *.servicebus.windows.net | 443, 9350-9354 | Listeners on Service Bus Relay over TCP (requires 443 for Access Control token acquisition) |
| *.frontend.clouddatahub.net | 443 | HTTPS |
| *.core.windows.net | 443 | HTTPS |
| login.microsoftonline.com | 443 | HTTPS |
| *.msftncsi.com | 443 | Used to test internet connectivity when the gateway is unreachable by the Power BI service |

  • If you must approve IP addresses instead of domains, you can download and use the Microsoft Azure Datacenter IP ranges list. In some cases, Azure Service Bus connections are made with IP addresses rather than fully qualified domain names.


Thanks for reading!

P.S.: In the last couple of months, I have worked extensively with Logic Apps and the on-premise data gateway, so feel free to contact me if you have any questions.

Categories: Azure
Tags: Azure, Logic Apps
Written by: Sagar Sharma

Posted on Monday, March 12, 2018 18:12

by Tom Kerkhove, Sam Vanhoutte and Jan Tilburgh

Recently, a few of our colleagues participated in The Barrel Challenge, a 3,500-kilometer relay tour across Europe, riding in a 30-year-old connected car.

We wouldn't be Codit if we didn't outfit the car with sensors to track its data in real-time. The sensors installed in the classic Ford Escort collected data on noise, temperature, speed, humidity and location, based on the telemetry that the device in the car was sending to Microsoft Azure via our Nebulus IoT Gateway.

Missed everything about the rally? Don't worry - Find more info here or read about it on the Microsoft Internet of Things blog.

Connecting our car to the cloud

The car was connected with the sensors, using the GrovePi starter kit, to a Raspberry Pi 3 running Windows 10 IoT Core. This starter kit contains a set of sensors and actuators that can easily be read using the .NET SDK provided by the Grove team.

In order to connect the device with the Azure backend, our Nebulus™ IoT Gateway was used. This is our own software gateway that can be centrally managed, configured and monitored. The gateway is built on .NET Core and can run in Azure IoT Edge.

The GPS signals were read from the serial port, while the other sensors (temperature, sound, humidity…) were read using the GPIO pins through the GrovePi SDK.

The gateway was configured with buffering (as connectivity was not guaranteed in tunnels or rural areas), so that all data was transmitted on reconnect.

Connectivity was provided over 4G through a Mi-Fi device.

Real-time route & position

The most important part of the whole project: having a real-time map to see the current position of the car and show sensor data.

There aren't many options to handle this: you can go low-level with WebSockets or use something like socket.io, but we chose SignalR, given that we are most familiar with the Microsoft stack.

The setup is fairly easy: you add the NuGet packages, set up a hub class and implement the client library. We decided to go for the latest version, which runs on .NET Core. The best thing about this new version is that there's a TypeScript library, and yes, it works with Angular 5! To connect SignalR to our application we wrapped it in a service which we named "TrackerService".
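The hub class itself is only a few lines. A minimal sketch of what ours might have looked like; the hub, route and method names are assumptions, the real code is in the GitHub repository mentioned at the end:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Minimal SignalR Core hub: dashboard clients connect here and
// receive every new telemetry reading that gets broadcast.
public class TrackerHub : Hub
{
    // Called by the ingestion side to fan a new reading out to all connected clients.
    public Task BroadcastTelemetry(string telemetryJson) =>
        Clients.All.SendAsync("telemetryReceived", telemetryJson);
}

// In Startup.Configure, the hub is mapped to an endpoint, e.g.:
// app.UseSignalR(routes => routes.MapHub<TrackerHub>("/tracker"));
```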

All this data also had to be managed on the client, which we did with NgRx, a Redux clone for Angular with RxJS support. This means the components don't get data directly from the TrackerService, nor does the service push any data to the components. Instead, the TrackerService just dispatches an action with the payload received from SignalR; the action is then handled by a reducer, which updates the state. The components subscribe to the state and receive all the changes. The advantage of this is that you can switch to `OnPush` change detection in all of the components, which results in a performance boost.

The map

For the map we initially looked at Azure Location Based Services, but it didn't support the features we needed, such as custom markers, at least not when we started the project. This made us choose Leaflet, which is free and has a lot of interesting features. First of all, it was very easy to show the total route by just passing an array of GPS coordinates into a polyline function. The best part of Leaflet was that it was super easy to calculate the total distance of a route: just reduce the GPS array and call the distanceTo method with the previous and current coordinates, and you'll get an estimated distance. No need to call an extra API!

Updating Leaflet data is just a matter of subscribing to the NgRx store, appending the real-time data to the current polyline and updating the position of the car marker.

Creating aggregates in near-real-time

In order to visualize how our team was doing, we decided to create aggregates every 15 minutes and every hour for a variety of metrics like speed and altitude. We based these aggregates on the device telemetry that was sent to Azure IoT Hub. Since we were already using routes, we added a new one that includes all events to be consumed by our aggregation layer.

To perform these aggregations it was a no-brainer to go with Azure Stream Analytics, given that it can handle the ingestion throughput and it natively supports aggregates by using windowing, more specifically a tumbling window.
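We kept the actual query in Stream Analytics, but the idea of a tumbling window is easy to illustrate: events fall into fixed, non-overlapping time buckets, and one aggregate is emitted per bucket. Below is a conceptual C# sketch of a 15-minute speed aggregate, not our ASA query; the types are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public sealed record Telemetry(DateTime Timestamp, double Speed);
public sealed record SpeedAggregate(DateTime WindowStart, double AverageSpeed, double MaxSpeed);

public static class TumblingWindowDemo
{
    // Buckets telemetry into fixed, non-overlapping windows (e.g. 15 minutes),
    // mirroring what a tumbling window does in a Stream Analytics query.
    // Usage: TumblingWindowDemo.Aggregate(events, TimeSpan.FromMinutes(15))
    public static IEnumerable<SpeedAggregate> Aggregate(IEnumerable<Telemetry> events, TimeSpan window)
    {
        return events
            .GroupBy(e => new DateTime(e.Timestamp.Ticks / window.Ticks * window.Ticks))
            .OrderBy(g => g.Key)
            .Select(g => new SpeedAggregate(g.Key, g.Average(e => e.Speed), g.Max(e => e.Speed)));
    }
}
```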

By using named temporal result sets, we were able to capture the aggregate results once and output them to all required sinks. This allows us to keep our script simple and still produce the same results without duplicating the business logic.

And that's how we built the whole scenario. Here are all the components we used, in a high-level overview:

 

Want to have a look? You can find all our code on GitHub.

Thanks for reading,

Jan, Sam & Tom

Posted on Wednesday, March 7, 2018 18:36

by Toon Vanhoutte

Recently, I discovered a new tab for Logic Apps resources in the Azure portal, named Workflow Settings. Workflow settings is a very generic name, but it's good to know that it includes additional access control configuration, through inbound IP restrictions. There are two types of restrictions possible: on the runtime and on the run history. Let's have a closer look!

Runtime restrictions

You can configure IP restrictions to your Logic Apps triggers:

  • Any IP: the default setting that does not provide any additional security
  • Only other Logic Apps: this should be the default setting for Logic Apps that are used as reusable components
  • Specific IP ranges: this should be configured for externally exposed Logic Apps, if possible

When trying to access the Logic App trigger from an unauthorized IP address, you get a 401 Unauthorized.

"The client IP address 'XXX.XXX.XXX.XXX' is not in the allowed caller IP address ranges specified in the workflow access control configuration."

Run history restrictions

You can also restrict calls to the run history inputs and outputs. When there are no IP addresses provided, there's no restriction. From the moment you provide IP ranges, it behaves as a whitelist of allowed addresses.

When trying to access the Logic App run details from an unauthorized IP address, you can still see the visual representation of the Logic App run. However, you're not able to consult any further details.

 

Conclusion

Another small, but handy security improvement to Logic Apps. It's important to be aware of these capabilities and to apply them wisely.

Cheers!
Toon

Categories: Azure
Written by: Toon Vanhoutte