Codit Blog

Posted on Wednesday, May 13, 2015 7:28 PM

by Maxim Braekman, Sam Neirinck and Tom Kerkhove

The second edition of Techorama, hosted at Utopolis Mechelen, provided a wide range of interesting sessions covering all kinds of topics. Read more about some of the sessions from the second day in this post.

Just as promised in yesterday's post covering day 1, we are back with an overview of some of the sessions from the second day of Techorama.

Internet of things, success or failure by Stefan Daugaard Poulsen

Twitter: @cyberzeddk

One of the sessions to start off the second day of Techorama was one about the Internet of Things, presented by Stefan. He made it clear from the very beginning that this was not going to be a technical session about writing the code that runs on devices, nor about the electronics themselves, since Stefan, to put it in his own words, knows jack about them.

Companies are continuously attempting to invent new devices for all kinds of purposes, but are all of these devices actually useful? It's not just about inventing shiny new devices that look good; companies should take some aspects into account:

  • Does it solve a problem? Can it be used to actually make life easier or provide useful information?
  • Is it consumer-friendly? In other words, can it be shipped without a user manual and without raising questions?
  • Does it repeat history? There is no use in re-creating devices that clearly failed in the past.

Of course, one could ask a whole bunch of other questions before starting development or creating a Kickstarter project. But the questions above are vital in order to build a device that might turn into a success.

Although the Internet of Things is becoming widely popular and lots of companies are jumping onto the IoT train, there are quite a few challenges:

  • Privacy: what happens to the data that is being collected by the device?
  • Security: since most devices will be connected to a network, they must not become the source of security leaks.
  • Data processing: all of the sensors generate a huge load of data, which needs to be processed in an orderly way.
  • Data storage: all of the data that is being processed needs to be stored in a correct way. Do you actually need all of the data? How long do you need to keep it?
  • Futuristic thinking: devices should enhance the current world, but within limits. It is not always possible to change how everything currently works without expensive modifications.
  • Battery life: there is no use in creating a device that needs to be charged every couple of hours.

Overall, people and companies should think before creating the next new thing, as it needs to be useful, non-intrusive, reliable and enhancing.

Embellishing APIs with Code Analyzers by Justin Rusbatch

Twitter: @jrusbatch

Visual Studio 2015 ships with the long-awaited Roslyn compiler platform. I can’t remember when exactly Microsoft started talking about Compiler as a Service, but it’s been a couple of years at least. However, it was worth the wait!

As is more and more common within Microsoft, the development of this platform happens in the open on GitHub. This means the compiler is no longer a black box, but a fully featured set of APIs which can be used to analyze code, among many other things.

Justin did a small demo on how easy it is to create and debug an analyzer using Visual Studio 2015 and the VS2015 SDK. It was a simple demo analyzer which indicates that class names must be in upper case (I suggest not using this in your actual code).
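To give an idea of what such an analyzer involves, here is a minimal sketch along the lines of that demo, assuming the standard Roslyn DiagnosticAnalyzer API from the Microsoft.CodeAnalysis packages; the diagnostic ID and messages are made up for illustration and the session's actual code may have differed in the details:

```csharp
using System.Collections.Immutable;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class UpperCaseClassNameAnalyzer : DiagnosticAnalyzer
{
    // Hypothetical diagnostic ID and messages, for illustration only.
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "DEMO001",
        title: "Class names must be upper case",
        messageFormat: "Class name '{0}' contains lower case letters",
        category: "Naming",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        // Ask the compiler to call us back for every named type it encounters.
        context.RegisterSymbolAction(AnalyzeNamedType, SymbolKind.NamedType);
    }

    private static void AnalyzeNamedType(SymbolAnalysisContext context)
    {
        var symbol = (INamedTypeSymbol)context.Symbol;

        // Report a warning on every class whose name contains a lower case character.
        if (symbol.TypeKind == TypeKind.Class && symbol.Name.Any(char.IsLower))
        {
            context.ReportDiagnostic(
                Diagnostic.Create(Rule, symbol.Locations[0], symbol.Name));
        }
    }
}
```

The analyzer project template that comes with the VS2015 SDK packages this up so you can press F5 and debug it in an experimental instance of Visual Studio.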

A code analyzer looks like this in Visual Studio: [screenshot]

I can already think of quite a few use cases for code analyzers. If we think about BizTalk development alone, one can imagine quite a few rules to create, just for pipeline components:

  • The pipeline component class must have a GuidAttribute and a ComponentCategoryAttribute (this prevents a few minutes of wondering why your pipeline component doesn't show up in the Toolbox).
  • Do in-depth code analysis to see if the Load and Save methods are implemented correctly.
  • Create warnings for working with non-streaming classes.

Additionally, each integration project has business-specific rules and coding/naming guidelines. Perhaps your guidelines require you to call LogStartMethod() and LogEndMethod() in each and every method. Now you can create an analyzer which verifies this, and optionally breaks your build. This way you can ensure that all your guidelines are enforced, and you get great Visual Studio tooling as an additional benefit. You can even create quick fixes, so it's just a matter of clicking the light bulb and the log statements are inserted without you typing a thing.
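As a rough sketch of how such a guideline check could look (the LogStartMethod name and diagnostic ID below are purely hypothetical, not an actual convention from the session), a syntax-node analyzer can inspect each method body:

```csharp
using System.Collections.Immutable;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class LogStartMethodAnalyzer : DiagnosticAnalyzer
{
    // Hypothetical rule: every method body must start with a LogStartMethod() call.
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        "LOG001", "Missing LogStartMethod call",
        "Method '{0}' should call LogStartMethod() as its first statement",
        "Logging", DiagnosticSeverity.Warning, isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
        => context.RegisterSyntaxNodeAction(AnalyzeMethod, SyntaxKind.MethodDeclaration);

    private static void AnalyzeMethod(SyntaxNodeAnalysisContext context)
    {
        var method = (MethodDeclarationSyntax)context.Node;
        if (method.Body == null)
        {
            return; // skip abstract, extern and expression-bodied methods in this sketch
        }

        // Check whether the first statement is an invocation of LogStartMethod().
        var firstStatement = method.Body.Statements.FirstOrDefault() as ExpressionStatementSyntax;
        var invocation = firstStatement?.Expression as InvocationExpressionSyntax;
        var identifier = invocation?.Expression as IdentifierNameSyntax;

        if (identifier == null || identifier.Identifier.Text != "LogStartMethod")
        {
            context.ReportDiagnostic(
                Diagnostic.Create(Rule, method.Identifier.GetLocation(), method.Identifier.Text));
        }
    }
}
```

A matching CodeFixProvider could then offer to insert the missing call when you click the light bulb.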

All in all, it’s something I will definitely look into in the coming weeks.

Teamwork - Playing Well With Others by Mike Wood

Twitter: @mikewo

Those who've read yesterday's post already know I'm a fan of Mike as a speaker, but today's session was really inspiring!

The focus of the talk was how you, as an individual, can work well with others. The first step to achieve this is to stop thinking about yourself as an individual and instead think about the team. Together you are one and believe in one big picture - your main objective. You need to get rid of your ego and work together as a team to achieve your goal and get across the hurdles that are holding you back from achieving it.

Here are some other interesting tips he gave us:

  • When communicating in a team, do it wisely - don't start pointing fingers at each other when things fail, but work together to fix it as soon as possible. Talk in the we form when it's positive, otherwise talk in the I form, i.e. I broke the build because of this or that. Avoid the "lottery" effect where only one person has knowledge about a certain topic; losing him/her means losing a lot of knowledge.

  • Great work can be rewarded with incentives, but do it the right way - reward the team instead of one individual. As an example, don't reward the salesman who sold the most; reward the team when they've reached a certain target. This will boost their team spirit instead of creating internal competition.

  • Understand failure and accept it - nobody is perfect and everybody makes mistakes, so accept this. Mistakes are inevitable, but make sure you learn from them.

  • Leadership - not everyone wants to be a leader, so don't push people into this. A true leader knows how his team members work and feel, so he can take that into account. Provide them with guidance and the vision you are striving toward. Delegation is also key to success, but don't assign tasks you would not want to do yourself.

  • Invest in your team members - have trust in them and let them research things they're interested in.

These are just some of the examples Mike gave us that can really contribute to thinking as a team, working as a team and shipping great solutions as a team.

I'd like to end this blog post with a quote Mike mentioned during his talk.

"We will accomplish what we do together. We share our successes & we never let anyone of us fail alone."
- USC Covenant
 

 

This rounds up our two-day adventure at Techorama. First of all we want to thank everybody for reading our two blog posts, and of course a big thank you to the organizers of Techorama for creating such an amazing event!

Thanks for reading,

Maxim, Sam & Tom

Posted on Tuesday, May 12, 2015 3:44 PM

by Maxim Braekman, Sam Neirinck and Tom Kerkhove

The second edition of Techorama, hosted at Utopolis Mechelen, provided a wide range of interesting sessions covering all kinds of topics. Read more about some of the sessions from the first day in this post.

The second edition of Techorama again promises to be an interesting event, grouping experts in all kinds of technologies to share their knowledge and experiences. Split over two days, there are over 70 sessions. A short summary of some of the sessions of this first day can be found below.

Keynote by Hadi Hariri

The honor of kicking off the entire event went to Hadi Hariri, who gave an inspiring presentation about the constant chase of developers and tech companies for the mythical "silver bullet". In other words, developers keep looking for the ultimate framework that allows them to build any kind of great application. Because of this constant chase, new frameworks keep popping up and people keep moving from framework to framework, only to discover that the brand-new technology is not perfect either. Since every framework will have its limitations, the goal of finding this silver bullet remains unreachable.

But with any type of project, the most important idea to keep in mind is to think before you act. Don't just start developing your apps using the newest technology, but consider what would be the best choice for your specific situation.

Keeping in mind that several types of frameworks, technologies and tools are the subject of several sessions, this keynote started the event off in a very fitting way.

SELECT VALUE FROM DATASTREAM by Alan Smith

Those who were present at the first edition of Techorama will notice a first familiar face. Alan Smith is back, this time giving an insight into the usage of Azure Stream Analytics by collecting the telemetry data from…, yes, the racing game he loves using in any demo :)

By sending all of the telemetry data to an Event Hub, Alan was able to process this data with Stream Analytics to get the average speed, gear, best lap time,… but also to figure out whether anyone was cheating. Stream Analytics makes it possible to query the data in almost any way, allowing you to look for strange or abnormal values and therefore find cheaters.

As Sam Vanhoutte already gave an extensive description of Stream Analytics in this blog post, I will not be diving into this subject, but the demo given by Alan made sure that all of the possibilities were very well illustrated.

All in all, yet again an interesting and entertaining presentation.

Messaging patterns by Mike Wood

After his talk last year, I was looking forward to seeing Mike Wood in action again! He gave a good session on using a messaging approach in your project, covering what problems it can fix for you but also what the downsides are.

During the session Mike walked us through some of the concepts & patterns used in messaging.
Here are some examples of those discussed:

  • Handle your poison messages; you don't want to waste resources trying to process these messages and block everything. It's a best practice to send them to a separate queue, i.e. a dead-letter queue, so you can keep on processing the other messages.
  • Support test messages when possible. This allows you to test a specific behavior on a production system without changing the live data.
  • Trace your messages so you can visualize the flow of a message. This can help you determine what happens to your messages and where the culprit is when one gets lost.
  • Don't lose your messages! By tracing your messages you can follow the trail a message takes. When using Service Bus Topics, it's possible that the topic swallows your message if there is no matching subscription. One option to handle this is to create a catch-all subscription (see the sketch after this list).
  • Provide functional transparency so that you know what the average processing time is for a specific action, allowing you to pinpoint issues and provide alerting on this.
  • Use idempotent processing or provide compensating logic as an alternative. If your processing is not idempotent, you should provide an alternative flow that allows you to roll back the state.
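As a small sketch of that catch-all idea, using the Service Bus SDK of the time (the connection string, topic and subscription names below are made up for illustration):

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class CatchAllSubscriptionSetup
{
    static void Main()
    {
        // Hypothetical connection string and entity names, for illustration only.
        var namespaceManager = NamespaceManager.CreateFromConnectionString(
            "Endpoint=sb://your-namespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...");

        if (!namespaceManager.TopicExists("orders"))
        {
            namespaceManager.CreateTopic("orders");
        }

        // Regular subscriptions only receive messages that match their filter.
        namespaceManager.CreateSubscription("orders", "high-priority",
            new SqlFilter("Priority = 'High'"));

        // A catch-all subscription uses a TrueFilter, so it matches every message;
        // messages no other subscription picks up are not silently swallowed by the topic.
        namespaceManager.CreateSubscription("orders", "catch-all", new TrueFilter());
    }
}
```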

In general, messaging can help you improve scalability & flexibility by decoupling your solution, but this comes with the downside of increased complexity. Processing in sequence or guaranteeing ordering is also not easy.

Although I have some experience with messaging, it was still a nice session where he gave some additional tips on how you can trace your messages better and pinpoint issues by using average processing times as a reference.

Great speaker & great content!

Docker and why it is relevant for developers by Rainer Stropek

One of the benefits of going to a conference is to learn about technologies you would otherwise not pick up easily. It's also an opportunity to learn about speakers unknown to you. Since I didn't know Rainer and only had a very (very) high-level knowledge of Docker, this seemed like a good session.

Docker is a platform to facilitate building, shipping and running your application, anywhere. It uses a concept called container virtualization. This is a level above virtual machines: the container reuses the host operating system. This has the benefit that deployment using Docker is much faster than spinning up a new VM (it can take as little as a second).

What you deploy with Docker is a Docker image. An image is not a monolithic entity. You can build upon existing images (which can be found on Docker Hub), and only the modifications you make end up in your image; the base image is referenced in the Dockerfile.

Once you have set up your Docker image, you can easily deploy it to another environment and be sure it's set up identically to your development machine.

All of this and more was covered in Rainer’s session. At the end an ASP.NET 5 application was deployed with Docker, on an Ubuntu machine.

What about Windows, one might ask? Docker uses Linux-specific kernel features, which means that on Windows you'd need to run Docker inside a lightweight Linux virtual machine.
However, with the recent announcements of Windows Server Containers and Hyper-V Containers, I think it'll be very interesting to see how Microsoft incorporates the container model in both their cloud and on-prem solutions.

The slides of this excellent talk can already be found on his blog.

 

 

That was it for day one, stay tuned for more Techorama action tomorrow!!

 

Thanks for reading,

 

Tom, Sam & Maxim

Posted on Wednesday, April 29, 2015 10:00 PM

by Tom Kerkhove

In this brief blog post I will summarize the extended features of Azure SQL Database, walk through the new data offerings and give you some pointers for deeper insights.

Today Microsoft announced additional features for Azure SQL Database and two new big data services called Azure Data Lake & Azure SQL Data Warehouse at //BUILD/.

Extending the Azure SQL Database capabilities

Scott Guthrie announced new capabilities for Azure SQL Database, ranging from full-text search to elastic database pools and encrypting data at rest with Transparent Data Encryption (TDE), which presumably uses Azure Key Vault behind the scenes.

Learn more about the extended capabilities here:

  • Build 2015 : Elastic Database Tools (Video)
  • Microsoft Announces Elastic SQL Database Pools For Azure (Article)

SQL Data Warehouse

SQL Data Warehouse allows you to store petabytes of relational data in one place and integrates Machine Learning & Power BI.

While Azure SQL Data Warehouse is released two years after AWS' Redshift, Scott compared both services and showed that SQL Data Warehouse is now a leap ahead: it also offers the solution on-premises, is more flexible and comes with full SQL support.

Azure SQL Data Warehouse will become available as public preview in June.

Azure Data Lake

Last but not least is Azure Data Lake, currently in private preview, which allows you to store & manage a virtually infinite amount of data and keep it in its original format. This allows you to store your valuable data for ages without losing important segments of it.

Data Lake will be a central store for performing low-latency analytics jobs, with enterprise-grade security and integration with other services like Azure Stream Analytics. It is compatible with Hadoop's HDFS and Microsoft's HDInsight, as well as open-source tooling like Spark & Storm.

Learn more about Azure SQL Data Warehouse & Azure Data Lake here:

  • Microsoft announces Azure SQL Data Warehouse and Azure Data Lake in preview (Article)

  • Microsoft BUILDs its cloud Big Data story (Article)

  • Introduction to the Data Lake-concept (Article)

Current IoT offering in Microsoft Azure

Let's wrap up the day with a nice overview of the current Azure offering in the IoT space.

Let's hope that tomorrow's keynote will unveil what the famous Azure IoT Suite includes and what tricks Microsoft has up its sleeve.
In the meantime, have a look at today's IoT breakout sessions:

  • Internet of Things overview (link)
  • Azure IoT Security (link)
  • Best practices for creating IoT solutions with Azure (link)

All images in this blog post are property of The Verge & VentureBeat.

Thanks for reading,

Tom.

Categories: Azure IoT
written by: Tom Kerkhove

Posted on Friday, April 17, 2015 5:21 PM

by Maxim Braekman

On April 3rd, we had the honor of taking part in the world premiere of the IoT Dev Camp organized by Microsoft at their offices in Zaventem, Belgium. Our host of the day was Jan Tielens (@jantielens), who guided us through demos and labs using both cloud services and electronics.

In general, it might sound easy to hook up a range of devices to a proper interface, but there are a lot of things which have to be taken into account when setting this up. Some of the things you need to keep in mind are device registration, security and keeping connectivity settings up to date.

One way to secure the communication between the devices and a cloud service, without having to configure that security every single time, is to use a device gateway.
This gateway takes care of all the communication, and the corresponding security, between the devices and the cloud service. This allows you to easily add new devices without adapting the existing interface.

 

The goal of this session was to create a solution in which sensors register data that is sent to and processed by cloud services. Before we could actually start tinkering with devices and sensors ourselves, we got a nice presentation, including some demos, on how to configure and use Azure services such as Event Hubs, Stream Analytics and Mobile Services.

Event Hubs

An ideal service for collecting all data coming from several devices is Event Hubs. It enables the collection of event streams at high throughput, from a diverse set of devices and services. As this is a pub-sub ingestion service, it can be used to pass the data on to several other services for further processing or analysis.
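As a minimal sketch of what sending a sensor reading to an Event Hub looked like with the .NET SDK of the time (the connection string, hub name and payload are made up for illustration):

```csharp
using System.Text;
using Microsoft.ServiceBus.Messaging;

class SensorReadingSender
{
    static void Main()
    {
        // Hypothetical connection string and Event Hub name, for illustration only.
        var client = EventHubClient.CreateFromConnectionString(
            "Endpoint=sb://your-namespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...",
            "sensor-readings");

        // Each device publishes its readings as small JSON payloads; Event Hubs ingests
        // these streams at high throughput so other services (e.g. Stream Analytics) can consume them.
        var payload = "{ \"deviceId\": \"device-01\", \"temperature\": 21.4 }";
        client.Send(new EventData(Encoding.UTF8.GetBytes(payload)));
    }
}
```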

Stream Analytics

Thanks to Stream Analytics, the data retrieved from the sensors can be analyzed in real time, showing the most recent, up-to-date information on, for example, a dashboard.

As Sam Vanhoutte already gave an extensive description of Stream Analytics in this blog post, I will not be diving into this subject.

Mobile services

Using Azure Mobile services, you can quickly create a service which can be used to process and show any type of data on a website, mobile device or any other application you could be creating.

This session did not go into the details of creating mobile services with custom methods. They were only used as an example to show that the backend database can store the output of Stream Analytics. In a real-life solution this would allow you to make all of the data collected from several sensors publicly available.

Devices

There are several types of devices which can be used as a base to start setting up an IoT interface. Some of those boards are briefly described below.

Arduino

The Arduino, which is an open-source device, is probably the most popular one currently available. The biggest benefit of this device is the large community, which allows you to easily find the information or samples needed to get you going.

The downside of this device is its low specs. With only 32K of flash memory, it has a limited set of capabilities. Security-wise, for instance, it is not possible to communicate with services using the HTTPS protocol; however, it is capable of sending data over HTTP using a UTP shield.

More info can be found here: http://arduino.cc/

Netduino

Another device, which is quite similar to the Arduino, is the Netduino, as it is based on the platform of the former. This board has better specs than the Arduino but is, because of those specs, more power-hungry.

The big difference, however, is that it allows you to run the .NET Micro Framework, enabling you to develop using .NET languages.

Then again, the downside of this board is that the community is not as big, meaning you will have to figure out more of the details yourself.

More info can be found here: http://www.netduino.com/

.Net Gadgeteer

One of the other available development boards is the Microsoft .NET Gadgeteer, which also enables you to use the .NET Micro Framework and allows you to use a range of "plug-and-play" sensors.

The specs of this device are better than those of both previous boards, meaning it has a lot more capabilities, but then again it does not have a large community helping you out.

More info can be found here: http://www.netmf.com/gadgeteer/

Raspberry Pi

Of all of these boards, the Raspberry Pi is the one with the highest specs, even allowing you to run an operating system on top of the device. Typically, this device runs a version of Linux, but as was announced some time ago, Microsoft will publish a free version of Windows 10 that will be able to run on the board!

The huge benefit of this board is the capability of using pretty much any programming language for development, since any required framework can be installed.

However, before you can start using the Raspberry Pi, you will need to obtain a copy of an OS, which has to be 'burned' onto a micro-SD card that will act as the 'hard drive' of this board.

More info can be found here: http://www.raspberrypi.org/

Closure

Once the presentation and demos were finished, we all got the chance, during a ‘hackathon’, to attempt to set up a fully working flow, starting from sensors and ending up in a cloud service.

Overall, this session gave us a nice overview of the capabilities of IoT in real-life situations.

To round up the session, Microsoft came up with a nice surprise. As this was the world premiere, all of the attendees received a Raspberry Pi 2, preparing us for integrating IoT using Windows 10.

Thank you, Microsoft!

 

 

 

Categories: IoT Azure
written by: Maxim Braekman

Posted on Wednesday, April 15, 2015 12:40 PM

by Glenn Colpaert, Henry Houdmont and Pieter Vandenheede

The second day of the London BizTalk Summit 2015 is over and did not disappoint. In this blog post you can find our view on today's sessions and our conclusion.

Intro

The London BizTalk Summit 2015 is over and we’ve enjoyed every minute of it. Below you can find our view on today’s sessions. Be sure to check our day 1 recap of yesterday in case you missed it: London BizTalk Summit 2015 - Day 1 recap

The second day was more focused on developers - not admins ;-) - and started with a number of short back-to-back sessions.

Don't hesitate to put questions or remarks in the comments section of this blog.

Hybrid Solutions with the current BizTalk Server 2013 R2 platform

Speaker: Steef-Jan Wiggers (https://twitter.com/SteefJan)

Steef-Jan had the honor to kick off the second day of the BizTalk Summit 2015.

In the fast-changing world of technology, we - as integration people - are constantly confronted with new challenges and opportunities; this forces us to modernize our view of integration.

The focus of Steef-Jan during his session was to demonstrate how to tackle some of these challenges with BizTalk Server 2013 R2.

He showed us how easy it is to consume REST services from BizTalk and how the BizTalk tools facilitate working with JSON files.

 

10x latency improvement – how to squeeze performance out of your BizTalk solution

Speaker: Johan Hedberg (https://twitter.com/johed)

In this session Johan analyzed a real-life case of an over-architected BizTalk solution that required optimization to improve latency and throughput. I've listed some of the improvements Johan touched on below:

  • Reduce MessageBox hops by nesting orchestrations
  • Consider levels/layers of reuse by using canonical processes and methods
  • Memory is cheap, so consider caching your data
  • Host management is important: consider host separation, tweaking throttling settings, tweaking polling intervals,…
  • Use custom performance counters to quickly identify where the bottleneck of your application is
  • Optimize your orchestration persistence points

Be aware, none of these improvements is the real silver bullet. The most important thing is that you know your solution, know your requirements and act upon them.

DEV – TEST – TUNE - REPEAT

Johan will do a more detailed session on performance optimizations on Integration Mondays next week! (http://www.integrationusergroup.com/)

BizTalk Server tips and tricks for developers and admins

Speaker: Sandro Pereira (https://twitter.com/sandro_asp)

As BizTalk Developers and BizTalk Admins it’s important to maintain the health of the platform and to have tools and techniques to produce efficient integration solutions.

Sandro took us on a trip around some useful BizTalk Server tips, tricks and workarounds for both developers and administrators. Along the way he took the opportunity to bash BizTalk administrator Tord Glad Nordahl, a good friend of his, which led to some funny situations during his session.

One of the more interesting tips was a tool Sandro created to clean out the BizTalk MarkLog tables, as BizTalk does not provide an out-of-the-box solution for this. You can download and try this tool at the following location: https://gallery.technet.microsoft.com/BizTalk-Server-Cleaning-15a1b070

Due to a lack of time, Sandro could not cover every tip and trick, so be sure to check out all of this valuable information when the slides come online.

Power BI tool

Speaker: Tord Glad Nordahl (https://twitter.com/tordeman)

A session by a BizTalk admin for BizTalk admins, but Tord was mainly trying to convince the developers to make nice graphs using Power BI instead of looking at raw data: an honorable but difficult mission.

Power BI has recently been released globally (GA) for any platform. It's a cheap tool (10 USD per user per month) which allows you to manipulate data in an easy way to show nice dynamic graphs.

You can grab data from many different types of sources (SQL, CSV, …) to make reports. In a BizTalk scenario you could use the tracking database to show business performance graphs about certain flows running in your BizTalk environment.

Basically it comes down to this: grab whatever data you want from any source you want, then merge, customize and create nice graphs to keep the business people happy.

Microservices and the cloud-based future of integration

Speaker: Charles Young (https://twitter.com/cnayoung)

Next up was Charles Young, who gave us a very enthusiastic and passionate session around µServices and the evolution of architecture leading up to them. I felt Charles did a great job of explaining this and, looking at the crowd there, he certainly shared his enthusiasm with them.

Charles talked from an architecture viewpoint and explained why moving from an ESB architecture to a µServices architecture can be a good idea. He explained the move from layered to hexagonal architecture and how µServices picked up on this idea.

With µServices it is important to make sure the layered architecture is maintained and boundaries are respected, so you don't fall into the pitfall of coupling your services and applications too tightly.

He explained how important it is to standardize the interfaces using SOAP or REST and how a lot of established and upcoming services are built around this principle of APIs.

The aspirations of µServices are:

  • Simplicity: chop up complex things and make them easy to reuse.
  • Velocity: allows you to speed up development.
  • Evolution: allows quick change by assembling building blocks to form an application.
  • Democratisation: allows a mild learning curve and the ability to expose something to the public quickly.

From monolithic design to µServices: services are too chunky and need to be decomposed into finer-grained µServices.

It is important to organize your services around business capabilities, so you build a nice stack from front-end to back-end. You also need to be able to deploy, host and version the µServices independently, and try to use lightweight communication, keeping it simple and fast.
Avoiding centralized governance and management will, for example, allow a cross-platform approach.
And as a last item: really try to focus on rapid reuse of the µServices, which comes back to the first point: fine-grained services accommodate this.

Later on in the session Charles - from a purely architectural standpoint - also talked about the limitations of the Microsoft stack, which made sense.

 As a closer, a quote from Charles which made a lot of sense:

"It's not because µservices is the new buzzword that we should leave our brains at the door."

Migrating to microservices

Speaker: Jon Fancey (https://twitter.com/jonfancey) & Dan Probert (https://twitter.com/probertdaniel)

Jon started off with some slides to convince the audience to move their flows from on-premises to µServices in the cloud, with the following arguments:

  • In the cloud you have more flexibility with the "pay as you go" pricing models and the easy scaling possibilities.
  • iPaaS (integration Platform as a Service) allows you to have an environment that needs less management, and the µServices can be easily updated.

After that, he continued by explaining step by step the equivalents between the cloud components and the on-premises building blocks:

  • A workflow can be considered a Logic App.
  • Maps are converted into BizTalk API App Transforms. A Microsoft tool exists to do this conversion, but XSLT is also supported, or you can host the map as-is using an API App, which allows you to use XSLT 2.0!
  • The Business Rules Engine becomes a Rules API App with a portal-based designer.
  • Trading Partner Management is not yet covered, but Microsoft is looking into developing a tool for this.
  • Pipelines become Logic Apps, where the pipeline components are converted into API Apps.

Dan Probert then took over to announce their new initiative: the Migration Factory.

They created a tool to migrate entire on-premises applications to µServices cloud solutions!
By exporting the MSI and uploading it to their tool, you'll get a Logic App and API Apps with similar functionality, where adapters become connectors, the MessageBox becomes a Service Bus API App, etc.

As they haven't covered everything yet, and probably won't be able to due to technical limitations, the website gives you a report with the parts that will be converted and a to-do list of what you should convert yourself.

BAM does not exist in the cloud and tracking can be done using the infrastructure REST API.

More information can be found here: http://migrationfactoryholding.azurewebsites.net/

 

Azure API Management Part 1 and Part 2

Part 1 Speaker: Kent Weare (https://twitter.com/wearsy)

Part 2 Speaker: Tomasso Groenendijk (https://twitter.com/tlagroenendijk)

The first session of the afternoon was covered by Kent Weare.

He started off explaining the difference between an API and a Web API. The main difference is that a Web API is about HTTP: it is RESTful and uses (preferably) JSON (or XML) and Swagger.

Kent showed us how APIs are on the rise. A lot of new public APIs are coming out each day, which makes you wonder how many APIs are still internal: what we see is just the tip of the iceberg. More and more APIs will be coming out due to the growth in mobile applications and services, IoT, big data, etc.

Kent then surprised us with a nice comparison of an API management solution with a bouncer or doorman, taking care of:

  • Authentication and authorization (API security)
  • Policy enforcement (play by the rules)
  • Analytics (being able to see how many calls were made and by whom)
  • Developer engagement (allowing other systems/services to connect to your application/service makes it more usable and integration-friendly)
  • Agility (being able to quickly adapt to the business)

Azure API Management started off when the race for API management began within the business. Microsoft acquired APIphany and as such ventured into API management land.

Kent then showed us a nice demo covering things like:

  • Creating and provisioning in the Azure portal (not the preview)
  • Defining operations
  • Defining policies
  • Test APIs from console
  • Showing analytics
  • The ease of enabling caching
  • Rate Limiting
  • Security

Next up was Tomasso Groenendijk, who had prepared a different demo from Kent's. The demo revolved around API Management in relation to the BizTalk ESB Toolkit patterns, the latter being one of his favorite BizTalk tools.

Earlier during the summit, Tomasso asked - via Twitter - for participation from the audience to sign up their APIs via a customized developer portal.

Unfortunately for Tomasso, his demos didn't go exactly as planned, a pity because he had obviously spent a lot of time preparing them. He touched on Azure API Management, Azure Websites, Azure SQL Database and even BizTalk360.

He explained the agility of using an ESB pattern in combination with Azure API Management to quickly expose this ESB as an API. Itinerary-based services and routing help to quickly adapt to changing business needs.

Azure of Things

Speaker: Nino Crudele (https://twitter.com/ninocrudele)

Nino kicked off his session in his well-known style: crazy and full of enthusiasm.
He started with an overview of the history of integration technologies and concluded with the fact that, even though technologies have evolved a lot over the years, the most used technology is still FILE, as it is simple, flexible, adaptable, serializable, reliable...

On an architecture level, you have multiple options:

  • peer-to-peer (spaghetti integration)
  • a central common transports/connectors layer between the endpoints and the integration framework
  • the integration framework contains the transports/connectors layer, and endpoints use proxies of this layer to connect to it

As Azure contains a lot of technologies, we could talk of an “Azure of Things” where the combined use of all those tools can bring us much more than the sum of the possibilities of each tool.

Following this small presentation, Nino showed a few demos of an architecture he created for event propagation using Azure Event Hubs, which he calls JitGate (Just-in-Time Gate).

His demos demonstrated his framework, which is still in an early stage, but looks very promising and performant. Knowing Nino, he might blow us away during the coming year when he upgrades the framework further!

Conclusion

After two days packed with sessions and new information from talented speakers, we were all pretty tired and anxious to get home. We learned a lot of new things and have enough ideas to keep this blog going for at least another year!

We hope you enjoyed our small recap and we eagerly await your feedback! What do you think we missed in our posts, or do you have another point of view? Let us know!