
9 out of 10 startups fail because their solution has no market fit. This indicates that the software development analysis stage was skipped or conducted incorrectly. In such a case, all further design, coding, and testing make little sense.

Seeing the big picture is vital when you begin developing software. So before you start crafting your product, take a look at the most common and secure software engineering life cycle to grasp what a complete process looks like.

1. ANALYSIS


Goal: To gather requirements and define the direction of the software engineering process.

Mistakes made during the analysis phase are the most expensive to fix later. This is the only time to build a foundation for your future product:

  • Carry out user research;
  • Analyze the competitors;
  • Establish business goals;
  • Measure success;
  • Map the customer journey;
  • Plan the budget.

It’s important to take into account the team on the client’s side. Its in-depth knowledge of the product goals and requirements, level of industry knowledge, and readiness to collaborate will contribute to the overall strategy and make the discovery stage more effective. It’s crucial to engage at least one person from the client’s side to get a better understanding of the scope and goals of your product. Such interaction will help you write a detailed Software Requirements Specification (SRS) and eliminate risks such as understaffing, budget overruns, and lack of market demand.

Outcome: Defined requirements and a written SRS.

2. PRODUCT DESIGN


Goal: To convert requirements into detailed software architecture.

Product design can be divided into three fundamental components:

  • Functionality;
  • Appearance;
  • Quality.

Functionality is the priority of the architect. By narrowing the focus, the architect helps business and technical teams work together on a product that targets both clients’ needs and business goals.

Appearance is the tangible output of UX designers, who conduct user research followed by sketching (of any kind), prototyping, and creating an MVP.

Quality stands for meeting customer needs and expectations with a product.

All of the above components should be sourced from the SRS document, which is the reference point for everyone involved in the project. The vital thing here is a thorough understanding of the context in which the software will be used. The more detailed the description of how the application should be built, the less additional input developers will need at the coding stage.

Outcome: Software design description.

3. SOFTWARE DEVELOPMENT


Goal: Translate the design of the system into code.

This is a lengthy phase, though less complicated than the previous two. Using the design description, programmers write the modules in the chosen programming language. The coding tasks are divided between the software engineering team members according to their skill sets. Front-end developers write the code that renders the application or product UI and the elements users interact with, while their back-end counterparts are in charge of the technical, server-side part of the product.

Establish conditions for organized and consistent coding:

  • Use proper guidelines;
  • Supervise every developer;
  • Automate deployment process;
  • Nurture the best programming practices.

Well-written code significantly reduces the number of test runs and maintenance-related problems at the next stages. At Erbis we use innovative design and development practices to drive our client’s growth.

Outcome: testable, functional software, and a Source Code Document.

4. PRODUCT TESTING


Goal: Code verification and bug detection.

One step remains before mass production: testing. It consists of Quality Assurance (QA), Quality Control (QC), and Testing. While these are technically separate parts of the development process, and their weight depends on project size, stakeholders often group them because they share the same end goal: a high-quality product.

Let’s figure out the difference:

  • Quality Assurance. This is a process-oriented activity aimed at ensuring that the team manages and creates deliverables consistently. The role of QA is to identify the reason for an error and to re-engineer the system so that such defects do not appear again. Focusing effort on improving QA processes is therefore one of the best investments an organization can make.
  • Quality Control. This is a product-oriented activity related to the intermediate and final results of software engineering. Examples of QC include technical reviews, software testing, and code inspections.
  • Testing. The tester creates tests and observes the behavior of the program under certain conditions, documents the results, and returns them to the developers. The best way to ensure that tests are run regularly is to automate them (see the small sketch after this list).
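
To make the idea of automated tests concrete, here is a minimal sketch using the xUnit test framework. The Calculator class is hypothetical and simply stands in for any unit under test; the same pattern applies to whatever code your project actually ships.

C#

using Xunit;

// Hypothetical unit under test; in a real project this would be production code.
public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    // An automated check that can run on every build, so regressions surface immediately.
    [Fact]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        var calculator = new Calculator();

        int result = calculator.Add(2, 3);

        Assert.Equal(5, result);
    }
}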

Outcome: Thoroughly tested software that meets compliance requirements.

5. DEPLOYMENT


Goal: Software delivery to a target device.

Software deployment refers to the process of running a product on a server or device and can be summarized in three general phases:

  • Preparation; 
  • Testing;
  • Deployment.

A piece of software may be deployed several times during the software development life cycle, depending on how it functions and on error-check results. If it runs smoothly and as intended, consider your software ready to be launched for beta testing. The support team collects feedback from the first users, and if any bugs come up, the development team fixes them. After that, the final version is rolled out.

Outcome: Fully operational software in a live environment.


6. MAINTENANCE


Goal: Ongoing security monitoring and updates.

The process of software development is a never-ending cycle as the plan rarely turns out perfect when it meets reality. In most cases, product maintenance is the continuous phase intended to keep the software stable and up to date. If any new bugs and vulnerabilities appear, the maintenance team will mitigate them or flag them to be addressed in a future version of the software.

There are 2 types of maintenance:

  • Corrective. This means fixing defects discovered in production. They emerge because removing all faults before delivery is extremely difficult.
  • Adaptive. This means adding requirements that were not in the original plan. Such modifications arise when the environment or the input data changes.

Outcome: Improved user experience and productivity.

CONCLUSION

These steps are roughly the same from one software development life cycle model to another. They tend to occur in this order, though they can also be mixed into a rapidly-repeating cycle (like Agile) or break down into linear sequential phases (like Waterfall). Regardless of the method, the desired result of software engineering services is a competitive and customer-oriented product.

Reference: https://erbis.com/blog/6-phases-of-the-software-development-life-cycle/ 

What is Common Data Service (CDS), and why is it important for you if you use Power Apps or Power BI?

What is Common Data Service?

Common Data Service, abbreviated as CDS, is a data storage service, like a database. You can use CDS to store data in the form of tables, which are called entities. Common Data Service is used mainly in the Power Apps portal; however, it is accessible through other Power Platform services and Microsoft Dynamics. Data can be loaded into CDS entities in multiple ways, and it can also be extracted from there through different methods. In short, CDS is a data storage and retrieval system, like a database.

Common Data Service (CDS) is a data storage system, like a database.

CDS includes a set of base entities (tables), but you can add custom entities to it. You can access CDS through other Power Platform services (Power BI, Power Apps, Power Automate…) and some other Microsoft services.

Why is CDS important for you if you are using Power Apps?

If you are using Power Apps, then it means you are creating a mobile application. That application will most likely work with data: it captures information from the user through a data entry form, and it needs to store that data somewhere. You need a database system in which you can store your data and retrieve it.

CDS is free storage for you in the Power Apps environment: because you are already paying for the Power Apps license, you can use CDS at no extra cost.

Of course, you can go and build your database in other systems, such as an Azure SQL database, but then you need to pay for that service separately. Or you might prefer to keep it on-premises in a SQL Server database, in which case you would need to set up a gateway to use it. The choice of which database to use for your Power Apps app is up to you. However, CDS gives you a free and easy-to-use database system to work with and build your apps on.

CDS is the free database service that you can use in Power Apps to store and retrieve the data of your apps.

So, in a nutshell: CDS stores your Power Apps data at no extra cost, and it is easy to manage. You don’t need a database developer to go and build a database for you to load your data into. It is a data storage system that can be used by a citizen application developer.

You don’t need to know about databases, or be a database developer, to use CDS. It is built for the citizen app developer.

What does the CDS database management system look like?

Like many other database systems, CDS has a management tool in which you can view, edit, and manage entities. At the moment, the Power Apps portal serves as this management portal, and you will find CDS under the Power Apps portal as shown below.

 

What is the point of CDS if you are using Power BI?

There are two aspects of using CDS if you use Power BI. One is to use CDS as a data source system.

In the world of Power BI, we don’t store the data. We do, however, get data from a data storage system to analyze it. That is why most people in the Power BI world might not be familiar with CDS: from their point of view, it is just another database system, just another data source to get data from.

CDS is another data source that you can use when you Get Data in Power BI.

Another aspect of using CDS in Power BI is to use it as intermediate storage for your Power Query transformations. I have previously explained why you might need to decouple your Power Query transformation layer into Power BI dataflows. Using CDS, you can store the output of dataflows in CDS, like a database, or, let’s say, like a data warehouse, and use it for further analysis.

CDS can be your data warehouse if you use dataflows.

I highly recommend that you read the article I wrote about decoupling the data transformation layer, data modelling layer, and visualization layer in a Power BI implementation, which explains how dataflows can be an essential part of a multi-developer architecture.

The concept of dataflows is nowadays not just for Power BI but also for Power Apps: these are Power Platform dataflows.

What is the storage engine behind the scenes for CDS?

CDS stores, retrieves, and controls the data using Azure services. A number of Azure services are involved: Azure SQL DB and SQL elastic pools for relational data, Blob storage for non-relational data, and Cosmos DB for logs. The screenshot below, from Ryan Jones's session at Microsoft Ignite 2019, explains how these pieces fit together:

Ways to Load data into CDS

Because CDS is a storage system, you might ask how you can load data into it. What are the ways? Here is the answer:

  • Power Apps app. You can build an app using Power Apps that stores data in CDS entities.
  • Power Apps portal, using the Get Data and Power Query experience.
  • Dataflow: In the Power Apps portal, you can create a dataflow and schedule it to load data into CDS.
  • Other services, such as the Common Data Service Web API (see the sketch after this list).
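
As a small illustration of the last option, the sketch below writes one record to CDS through its OData Web API. The organization URL and access token are placeholders you would replace with your own environment and an Azure AD token (acquiring the token is not shown); the standard account entity is used only as an example.

C#

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CdsLoadSketch
{
    static async Task Main()
    {
        // Placeholder environment URL and a pre-acquired Azure AD access token.
        var orgUrl = "https://yourorg.crm.dynamics.com";
        var accessToken = "<access token>";

        using var client = new HttpClient { BaseAddress = new Uri(orgUrl) };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Create a record in the standard "account" entity through the OData endpoint.
        var body = new StringContent("{ \"name\": \"Contoso Ltd.\" }", Encoding.UTF8, "application/json");
        HttpResponseMessage response = await client.PostAsync("/api/data/v9.1/accounts", body);

        // A successful create returns 204 No Content, with the record URL in the OData-EntityId header.
        Console.WriteLine(response.StatusCode);
    }
}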

Ways to retrieve data from CDS

You can extract data from CDS in many different ways, including the following (a small Web API sketch follows the list):

  • Power Apps app: You can have forms in your app that show existing data from CDS entities.
  • Export data from the Power Apps portal
  • Dataflow
  • Power BI: Get Data from Common Data Service
  • Other services
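
For completeness, here is the matching read sketch against the same Web API. Again, the organization URL and token are placeholders; this simply prints the JSON for the first five account names.

C#

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CdsReadSketch
{
    static async Task Main()
    {
        // Placeholder environment URL and a pre-acquired Azure AD access token.
        var orgUrl = "https://yourorg.crm.dynamics.com";
        var accessToken = "<access token>";

        using var client = new HttpClient { BaseAddress = new Uri(orgUrl) };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Query the "account" entity, selecting only the name column and taking five rows.
        string json = await client.GetStringAsync("/api/data/v9.1/accounts?$select=name&$top=5");
        Console.WriteLine(json);
    }
}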

How much does it cost?

If you are using Power Apps, then you have a license that also covers CDS, so you don’t need to pay anything extra. However, different licenses have different limitations. Here you can find out more about it.

If you only have a Power BI license, then at the time of writing this article, the only way for you to use CDS is to pay for Power Apps licenses. However, remember that for getting data from CDS, you don’t need the license (because someone has already created the CDS environment and is paying for it). You would need a license if you are writing into CDS (through dataflows, perhaps, which needs its own blog article; I’ll explain that later in another post).

Summary

Common Data Service (CDS) is a database system. This database system stores the data in Azure data lake storage (cloud), and you can work with it through a management portal. CDS is a free database if you are using Power Apps licenses. You can then analyze the data in CDS using Power BI. CDS can also be used as a data warehouse layer using dataflows. In other blog articles, I’ll explain more about that scenario.

Reza Rad
TRAINER, CONSULTANT, MENTOR
Reza Rad is a Microsoft Regional Director, an author, trainer, speaker, and consultant. He has a BSc in computer engineering and more than 20 years’ experience in data analysis, BI, databases, programming, and development, mostly on Microsoft technologies. He has been a Microsoft Data Platform MVP for nine continuous years (from 2011 till now) for his dedication to Microsoft BI. Reza is an active blogger and co-founder of RADACAD, and he is also co-founder and co-organizer of the Difinity conference in New Zealand.
His articles on different aspects of technologies, especially on MS BI, can be found on his blog: https://radacad.com/blog.
He has written several books on MS SQL BI and is writing more. He was an active member of online technical forums such as MSDN and Experts-Exchange, was a moderator of the MSDN SQL Server forums, and holds MCP, MCSE, and MCITP certifications in BI. He is the leader of the New Zealand Business Intelligence user group. He is also the author of the very popular book Power BI from Rookie to Rock Star, which is free with more than 1700 pages of content, and of Power BI Pro Architecture, published by Apress.
He is an international speaker at Microsoft Ignite, Microsoft Business Applications Summit, Data Insight Summit, PASS Summit, SQL Saturday, and SQL user groups, and he is a Microsoft Certified Trainer.
Reza’s passion is to help you find the best data solution; he is a data enthusiast.

What is L1, L2, and L3 Support Engineering?

Harshana Madusanka Jayamaha
 

In this article, I’m going to explain the software support engineering role based on my experience. I have been a Level 2 and Level 3 support engineer during my career.

L1 — Level 1
L2 — Level 2
L3 — Level 3
Ticket — Incident

L1 support involves interacting with customers, understanding their issue, and creating tickets for it. The ticket is then routed to the relevant L2 support team (integration support, server & storage support, etc.). L1 support engineers have basic knowledge of the product/service and the skills to troubleshoot very basic issues such as password resets and software installation/uninstallation/reinstallation.

L2 support manages the tickets routed to them by L1 (L2 support can also create tickets for any issue they notice themselves). They have more knowledge and more experience in solving related complex issues and can guide and help L1 support folks with troubleshooting. If a solution is not found at this level, the ticket is escalated to L3.

L3 is the last line of support and usually comprises a development team that addresses the technical issues. They are experts in their domain and handle the most difficult problems. They make code changes, and they research and develop solutions for challenging new or unknown issues.

What is Power Apps?

Power Apps is a suite of apps, services, connectors, and a data platform that provides a rapid application development environment to build custom apps for your business needs. Using Power Apps, you can quickly build custom business apps that connect to your business data stored either in the underlying data platform (Microsoft Dataverse) or in various online and on-premises data sources (SharePoint, Microsoft 365, Dynamics 365, SQL Server, and so on).

Power Apps.

Apps built using Power Apps provide rich business logic and workflow capabilities to transform your manual business processes to digital, automated processes. Further, apps built using Power Apps have a responsive design, and can run seamlessly in browser or on mobile devices (phone or tablet). Power Apps "democratizes" the custom business app building experience by enabling users to build feature-rich, custom business apps without writing code.

Power Apps also provides an extensible platform that lets pro developers programmatically interact with data and metadata, apply business logic, create custom connectors, and integrate with external data.

For more information:

Power Apps for app makers/creators

Using Power Apps, you can create three types of apps: canvas, model-driven, and portal. More information: Overview of creating apps in Power Apps.

To create an app, you start with make.powerapps.com.

  • Power Apps Studio is the app designer used for building canvas apps. The app designer makes creating apps feel more like building a slide deck in Microsoft PowerPoint. More information: Generate an app from data

  • App designer for model-driven apps lets you define the sitemap and add components to build a model-driven app. More information: Design model-driven apps using app designer

  • Power Apps portals Studio is a WYSIWYG design tool to add and configure webpages, components, forms, and lists. More information: Power Apps portals Studio anatomy

Ready to convert your ideas into an app? Start here: Planning a Power Apps project

Power Apps for app users

You can run apps that you created, or that someone else created and shared with you, in browser or on mobile devices (phone or tablet). More information:

Power Apps for admins

Power Apps admins can use Power Platform admin center (admin.powerplatform.microsoft.com) to create and manage environments, get real-time, self-help recommendations and support for Power Apps and Power Automate, and view Dataverse analytics. More information: Administer Power Platform

Power Apps for developers

Developers are app makers who can write code to extend business app creation and customization. Developers can use code to create data and metadata, apply server-side logic using Azure functions, plug-ins, and workflow extensions, apply client-side logic using JavaScript, integrate with external data using virtual entities and webhooks, build custom connectors, and embed apps into your website experiences to create integrated solutions. More information:
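
As a small illustration of the server-side plug-in model mentioned above, here is a minimal sketch of a Dataverse plug-in. It assumes the Microsoft.Xrm.Sdk assembly; the class name and trace message are hypothetical, and registering the assembly against a message (Create, Update, and so on) is a separate step not shown here.

C#

using System;
using Microsoft.Xrm.Sdk;

// Hypothetical plug-in that runs server-side when the registered message fires.
public class FollowUpPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // The execution context describes the message and record that triggered the plug-in.
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        // "Target" carries the record the operation was performed on.
        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity entity)
        {
            tracing.Trace("Plug-in fired for entity: {0}", entity.LogicalName);
        }
    }
}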

Power Apps and Dynamics 365

Dynamics 365 apps, such as Dynamics 365 Sales, Dynamics 365 Customer Service, and Dynamics 365 Marketing, also use the underlying data platform (Dataverse) used by Power Apps to store and secure data. This enables you to build apps using Power Apps and Dataverse directly against your core business data already used within Dynamics 365, without the need for integration. More information: Dynamics 365 and Dataverse

Try Power Apps for free

You can build Power Apps for free. Simply sign in to Power Apps. For more information, go to Sign in to Power Apps for the first time. Initially, you'll have access to the default environment.

A license is needed to play the apps made with Power Apps. You can both build and play Power Apps for free by signing up for either a 30-day trial or the developer plan.

Purchase Power Apps

If you have decided to purchase Power Apps, see here for detailed information: Purchase Power Apps.

Power Apps US Government plans

Power Apps US Government consists of several plans for US government organizations to address the unique and evolving requirements of the United States public sector. The Power Apps GCC environment provides compliance with federal requirements for cloud services, including FedRAMP High, DoD DISA IL2, and requirements for criminal justice systems (CJI data types). More information: Power Apps US Government

What is the Power Platform?

Microsoft rapidly innovates, updates, and releases new products and solutions, which can make staying on top of changes difficult. However, this frequent pace of innovation makes Microsoft technologies very exciting. One of the biggest new areas from Microsoft that you will read a lot about is the Power Platform.

Note: This article has been updated as Flow has been re-branded as Power Automate. Some visuals remain branded as Flow.

What is the Power Platform?

The ‘Power Platform’ is a collective term for three Microsoft products: Power BI, PowerApps and Power Automate (previously known as Flow). They provide the means to help people easily manipulate, surface, automate and analyse data and can be used with Office 365 and Dynamics 365 (as well as other third-party apps and other Microsoft services). The Power Platform is possible thanks to the Common Data Service (or CDS), which is essentially the underlying data platform that provides a unified and simplified data schema so that applications and services can inter-operate.

Power Platform Overview

Why is the Power Platform so important?

In this digital age, we are extremely reliant on data, and the amount of data companies are creating is continually increasing. While all this data is inevitable, it is useless unless companies gain insights and meaning from it to realise tangible value.

Historically, data analysis, app creation or automation would be handled by IT/development teams. This would require staff to outline their requirements and aims, submit these requests to their IT department (or even an external partner) and then see whether the request was approved and, subsequently, wait for it to be built. This would be time-consuming and would use valuable resources internally, or be costly if fulfilled externally. What's more, those requesting the solution would tend to have an immediate need, and waiting for weeks could cause internal delays.

This is why the Power Platform is so exciting. The Power Platform enables data democratisation – the ability for digital information to be accessible to the typical (non-technical) end user. It provides three technologies that allow staff to do more with their data themselves without coding knowledge. While it doesn’t allow the intricacies and flexibility of custom coding, it does provide a simple method for most users to be able to create, automate or analyse their data in ways which have never been possible for the average worker.

PowerApps, Power Automate & Power BI explained

As mentioned, the Power Platform consists of three technologies - let's look at these three in more detail. 

 


PowerApps is a low-code approach to custom app development, allowing users to quickly create apps with a ‘point and click’ approach. It allows you to:

  • Build mobile-friendly apps quickly, without development knowledge, reducing pressure on busy IT teams
  • Connect to and surface data from your business applications, such as Dynamics 365 and Office 365 (and also third-party apps)
  • Surface key data into a user-friendly app to help data entry – meaning users only see the information they need to fulfil a particular task


Power Automate used to be known as Microsoft Flow. This allows you to create automated workflows between your Microsoft services or other third-party applications, which allows staff to avoid carrying out repetitive tasks and save valuable time. It allows you to:

  • Use pre-built automation templates for common automations (within the Flow Gallery)
  • Create your own automations by connecting various applications, such as Outlook, SharePoint, Dynamics 365 or non-Microsoft apps like Twitter, Asana, Gmail, MailChimp etc.
  • Set up triggers, alerts, automated emails, push notifications and much more – with no coding and in minutes
  • Overall it allows you to save time, reduce human error and streamline your processes


Power BI is a business analytics tool which allows you to easily connect to data sources, create visuals and gain business intelligence quickly. It allows you to:

  • Click and connect with Microsoft and third-party cloud services, as well as on-premise data sources
  • Easily manipulate data and create visuals, such as charts, dashboards, maps and many more – so you can present your data in an easy-to-digest format
  • Use natural language to query data and get results (e.g. “show me our sales pipeline for 2018 by month, by sales person”)
  • Overall, allowing you to easily analyse and make sense of complex data to enable continual improvement

How does the Power Platform fit with the wider Microsoft strategy?

The Power Platform connects to a wide range of data sources – including third-party apps such as Google Analytics and Twitter, however it is extremely powerful when working with Office 365 and Dynamics 365. Microsoft pitch the Power Platform as a way to “unlock the potential of Dynamics 365 and Office 365 faster than you ever thought possible” so you can easily extend, customise and integrate these services.

The Power Platform is going to be a big area of investment from Microsoft and as it is still fairly new, we expect that it will be regularly updated and improved as the products mature. What’s more, as Microsoft continue to focus on bringing all their technologies closer together, we can expect the Power Platform to be a key player for this – helping connect services like Microsoft 365, Dynamics 365 and Azure easily with a low-code/no-code approach.

How do I get it? Power Platform licensing and pricing

There is not currently a licensing bundle for all three products together. Instead you can purchase the three separately (and can mix and match) - or, if you haven't tried them you can use free versions or free trials to test their capabilities. 

PowerApps and Power Automate licensing is fairly complex. You can read the latest licensing guidance here: https://docs.microsoft.com/en-us/power-platform/admin/powerapps-flow-licensing-faq If you have any questions around licensing and pricing we recommend speaking to a Microsoft Partner. 

Power BI – There are three versions available, Free, Pro and Premium, which you can compare here: Power BI Pricing (Microsoft).

  • Free
  • Pro - note, required for sharing Power BI reports and dashboards
  • Premium (POA)

 

3 JUN 2019 - LISA CURRY

What’s included in the Microsoft Modern Workplace?

As businesses evolve, it is very easy for their business technology and processes to become out of date, potentially resulting in inefficient operations that predominantly affect customer experience and profit margins.

The Workplace Evolution published by Harvard Business Review found that 78% of senior executives in enterprise businesses believe fostering a modern workplace strategy is essential. However, only 31% think their company is forward-thinking enough to do so.

Changing a fixed mindset into a forward-thinking one within any business is no easy task, and with the implications of remote working due to the pandemic, many businesses have undergone rapid transformation over the past few months by implementing Modern Workplace technology. This ensures their employees are supported by leading technology that allows them to work smarter, be more productive, collaborate with remote teams, and reap the many other benefits that Microsoft 365 offers.

What’s included in the Microsoft Modern Workplace?

The Microsoft Modern Workplace is one that operates using the suite of Microsoft 365 technologies and productivity applications that harness the power of the Cloud.

Microsoft Modern Workplace applications improve employee productivity and satisfaction and create more seamless communication across the business whilst promoting collaboration and maintaining the security and integrity of systems and data.

 

What is a Modern Workplace?

The definition of a Modern Workplace is an operational setup which has been professionally designed to meet the physical and technological needs of both your business and its employees.

A Modern Workplace drives company-wide business transformation by utilising the latest Microsoft technology to power and streamline business operations and empower employees to do their best work around the clock.

What’s included within the Microsoft Modern Workplace?

Microsoft 365 includes Office 365, Windows 10 Enterprise, and Enterprise Mobility + Security, along with a variety of productivity and collaboration tools designed to support modern ways of working, help facilitate digital transformation, and, most importantly, keep your business secure.

How does the Microsoft Modern Workplace facilitate automation?

Within Microsoft 365 is Microsoft Power Automate (formerly Flow), which enables you to implement both digital and robotic process automation (RPA) across the business. By automating tasks you can quickly boost productivity, giving employees more time to focus on innovation rather than administrative tasks. You can also automate time-consuming tasks using the built-in AI capabilities and integrate with over 100 applications such as Microsoft Dynamics 365, Twitter, MailChimp, Google Analytics, etc.

How does the Microsoft Modern Workplace promote collaboration?

When internal and external teams come together and collaborate to solve a problem the magic happens. Facilitating collaboration can prove tricky especially with remote/shift workers or multi-site offices. However, the collaboration applications included within the Microsoft Modern Workplace such as Microsoft Teams, Microsoft Teams for Education, Office 365 and SharePoint enable employees to easily work together on documents no matter their location or time of day. Windows 10 Virtual Desktop takes this even further by allowing users to access their desktop from any device.

Microsoft Teams in particular is constantly evolving and adding new features to improve accessibility and user experience.

This boosts productivity, can raise morale and enables people within different teams to come together. The more flexible it is for employees to collaborate, the more collaboration will occur.

How secure is the Microsoft Modern Workplace?

Within Microsoft 365, the security stack gives us the insights to proactively defend against advanced threats such as malware, phishing, and zero-day attacks, as well as identity, app, data, and device protection with Azure Active Directory, Microsoft Intune, and Windows Information Protection.

What are the reasons to adopt a Modern Workplace?

Here are some common reasons why businesses choose to adopt the Microsoft Modern Workplace over a traditional setup:

Disparate communication channels – employees communicate through a variety of different channels, which allows messages to get lost in translation; there is limited visibility and a lack of control.

Stand-alone platforms – business management platforms are implemented that have very limited integration and automation with each other. This makes the risk of error high and increases workload.

Data/Team Silos – Teams work in silos with limited visibility to other parts of the business. Data is stored in numerous locations and access is restricted due to the setup.

Hardware – Desktop PCs are commonly used, and not all employees have access to laptops or tablets, which prevents remote working.

Technology – The technology you have implemented has scalable limitations and does not completely meet the needs of your employees.

Security – There is limited or no device or cyber security management in place putting your business at risk in the event of a breach.

 

 Ref: https://www.microsoft.com/en-us/itshowcase/microsoft-365 

 

1C:Enterprise in the cloud

The concept of cloud services for business applications is as simple as moving the application servers from the on-premises network to the Internet. The end users continue working with the same software (either the native client or the web client); the only thing required is an Internet connection. They no longer need to log on to the local enterprise network (directly or through VPN). Moreover, if the enterprise uses the SaaS model, the end users do not need to worry about software administration and updates any longer—the cloud service provider hosting your application servers will manage these tasks.

Eye catcher image: the author of this article illustrates the "1C:Enterprise in the cloud" concept by using simple objects: clouds, banner, aircraft, parachute.

1C:Enterprise applications support both HTTP and HTTPS connections, making for a seamless transition of 1C:Enterprise application servers to the Internet. That's all you need to create a basic 1C:Enterprise cloud solution.
The only difference between this scenario and the on-premises installation is the location of the application server. To provide a service to a new company, one needs at least a new 1C:Enterprise infobase and a physical computer or a virtual machine running 1C:Enterprise. Therefore, application administration costs grow with the number of companies that utilize the service.

Multitenancy and data separation

To reduce application administration costs, you can now have multiple companies work with a single application instance. Of course, the application must be designed for shared work. And the application business logic in the shared work scenario must be identical to the business logic in the scenario where each company uses its own application.

This application architecture type is known as multitenancy. We can explain the multitenancy concept using the following example: a regular application is a house that provides its infrastructure (walls, roof, water system, heating system, etc.) to a family that lives there. A multitenant application is a house with multiple apartments, each apartment having access to the same shared infrastructure that is implemented at the basic house level.

In the simplest terms, the goal of multitenancy is reducing application maintenance costs by pushing the infrastructure maintenance to a higher level. It is similar to reducing costs by selling standard out-of-box applications (that might require some customization) instead of writing applications for specific customers from scratch. The difference is that out-of-box solutions reduce the development cost, while cloud solutions reduce the maintenance cost.

One of the multitenancy aspects is the separation of application data. An application stores the data of all companies in a single database; however, each company can only access its own data (such as business documents and employee lists). Some reference data (such as legislation and regulations) can be made available to all companies. 1C:Enterprise applications can use the data separation functionality provided by the 1C:Enterprise platform.
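
1C:Enterprise applications are written in the platform's own built-in language, and the platform handles data separation for you, but the underlying idea can be sketched in a few lines of general-purpose code. The C# example below is purely conceptual; the Invoice record and TenantId field are hypothetical stand-ins for a separated entity and its separator value.

C#

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical separated entity: TenantId plays the role of the data separator.
record Invoice(int TenantId, string Number, decimal Amount);

class SeparatedInvoiceStore
{
    // One shared table for all companies (the "single database" described above).
    private static readonly List<Invoice> AllInvoices = new();

    private readonly int _tenantId; // separator value of the signed-in company

    public SeparatedInvoiceStore(int tenantId) => _tenantId = tenantId;

    // Writes stamp the current separator; reads filter by it,
    // so one company never sees another company's documents.
    public void Add(string number, decimal amount) =>
        AllInvoices.Add(new Invoice(_tenantId, number, amount));

    public IEnumerable<Invoice> GetOwn() =>
        AllInvoices.Where(i => i.TenantId == _tenantId);
}

class Program
{
    static void Main()
    {
        var companyA = new SeparatedInvoiceStore(tenantId: 1);
        var companyB = new SeparatedInvoiceStore(tenantId: 2);

        companyA.Add("A-001", 100m);
        companyB.Add("B-001", 250m);

        Console.WriteLine(companyA.GetOwn().Count()); // 1 - only company A's invoice
        Console.WriteLine(companyB.GetOwn().Count()); // 1 - only company B's invoice
    }
}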

Cloud services

Recently, a group of 1C:Enterprise developers faced the task of developing a cloud service for leasing 1C:Enterprise applications based on the SaaS model. Moreover, the service needed to be an out-of-box solution, a "cloud in the box" including everything the customer may need to deploy infrastructure for leasing 1C:Enterprise applications (or individual applications based on 1C:Enterprise).

So what is an ideal cloud service from the end user's point of view? A store with shelves filled with solutions: accounting, reporting, payroll and human resources, and so on. A customer fills their cart and pays at the cash desk (however, they pay a rent instead of a one-time payment). Then the customer decides which of their employees will have access to accounting, payroll, and HR, as well as other solutions.

What is an ideal cloud service from the service provider's point of view? Basically, it's a large store they own. They have to fill the shelves with goods (software), add new goods, and make sure the customers pay promptly. The service must also provide horizontal scalability, access to solution demos (test drive), and centralized user administration tools.

Of course one can implement all this functionality directly in the applications. However, this means duplicating a large amount of code. It is better to optimize the solutions by implementing their common functionality and administration tools in a product that will serve as an entry point for users of cloud services.
This is how 1cFresh technology was developed. 1C customers and partners use it in their SaaS services and private clouds. 1C Company has its own application lease service based on 1cFresh: 1cFresh.com and 1C:AccountingService (both in Russian).

The service functionality is divided between the following major components based on 1C:Enterprise and Java technologies:
  • Service website. A single entry point for all users.
  • Service manager. An administration and coordination tool governing all service components.
  • Application gateway. The component that provides horizontal scalability.
  • Service agent. The component that provides all utility functions, such as application version updates or backup creation.
  • Service forum. A forum for service users and service provider representatives.
  • Availability manager. The "Service temporarily unavailable" board that informs users about service unavailability or the unavailability of service parts; the board itself remains available even if central service components have failed.
Simplified 1cFresh component chart (with some components omitted)

Let us review the major service components in detail.

Service website

The site that provides the interface for service users is written in Java. It serves as a store shelf where users can choose applications to rent and try their demos. In addition to that, it is where users register, create application user accounts, read news and browse service online help. The 1cFresh.com page (in Russian) is exactly the "out-of-box" website, without any customizations.

A service can include any number of 1C:Enterprise server clusters that run 1C:Enterprise applications. Each cluster is registered in the service manager. 1C:Enterprise servers can run on both Windows and Linux computers. For example, the service at 1cFresh.com uses both Windows servers (with MS SQL Server DBMS for storing application data) and Linux servers (with PostgreSQL).

The cloud service administrators access the service via the service manager user interface. They use it to add 1C:Enterprise servers and applications, update application versions, manage user accounts, and perform other administrative tasks. Some of the operations, such as application updates, are delegated to the service agent component. The service manager communicates with the service agent via a web service.

Service agent

The service agent is a 1C:Enterprise application. It performs administrative operations on the service infobases, which include application version updates, scheduled backups, and gathering service operation statistics.

Application gateway

The application gateway is written in Java. It is responsible for the horizontal scalability of the service. It redirects service users to appropriate application servers.

Service forum

The service forum is a location where service users and service provider representatives can discuss the service and the applications available in that service. It is written in Java.

Availability manager

Some service features or even the entire service might be temporarily unavailable to end users. For example, an application is usually unavailable to end users while it is being updated, or the entire service might be unavailable during maintenance hours. The availability manager is a 1C:Enterprise application that displays messages about the unavailability of service resources to website and forum users even if all other service components, including the central service manager component, are not available.

1C:Enterprise infobases

1C:Enterprise infobases store application data. New infobases are added to serve as parts of scalability units. Each scalability unit is deployed as a single module and contains the following parts:
  • 1C:Enterprise server cluster
  • DBMS server that stores infobase data
  • One or two web servers (two ensure fault tolerance) that process HTTP requests to infobases belonging to the scalability unit
A scalability unit failure only affects the customers that work with the infobases belonging to the unit.

More service facts
  • The service supports a technology similar to OpenID that allows storing user authentication data in a single database. Therefore, you can set up Single Sign-On for the service and its users will be able to access all their applications (for example, accounting or payroll calculation) and the forum with a single user name and password.
  • Users can transfer local 1C:Enterprise application data (for example, accounting records) to the cloud and back.
  • Users can create standalone workstations (file infobases stored at their local computers). They do not need the Internet or service connections to work with such infobases. At the same time, they can use the service data exchange functionality to synchronize their local data with the cloud.
  • Users can set up automatic data exchange between the applications published in the service (for example, between accounting and payroll calculation applications). This minimizes the efforts required to input data because data entered into one application becomes available for all other applications.
  • A backup creation system is available. A user can initiate backup creation at any time, or schedule daily, monthly, and yearly backups.
  • The 1cFresh technology includes the data delivery feature. The service manager stores reference data that is always up-to-date and provides this data to all applications.
  • A service can run multiple versions of any application. These applications can use multiple 1C:Enterprise platform versions.
  • Users can update the applications that they use to access their infobases.
  • 1cFresh features the following tools for error identification and analysis:
    • Gathering infobase error data.
    • Writing this data to the error log of the service manager infobase.
    • Viewing error details. A service administrator can view the entire error log or filter it by infobase or application.
  • 1cFresh includes the "showcase" feature, which provides the option to run multiple cloud services on a single platform. A showcase is an Internet resource that provides services. From the user’s point of view, a showcase is an independent website with business applications. For example, a single website platform that belongs to a service provider can run several sites located in different domains, one featuring a showcase of small business applications, another with applications for public institutions, the third one with applications for medical institutions, and so on. Also, service providers can advertise each resource as an independent service.
  • The service includes a Feedback center where users can submit their feedback and feature requests. It is implemented as an application module that features a list of user posts and comments to these posts, voting for posts and commenting on them, as well as submitting feedback and feature requests. To include this functionality in an application, an administrator simply enables the subsystem where it is implemented.
  • Subscribers of 1cFresh-based services pay a subscription fee to the service provider. Flexible pricing options are available.
  • The service provides a wide range of options for viewing its usage statistics. One can use the statistics to determine the service load, obtain average key indicators of stable service work (for future evaluation of possible deviations), determine the periods of minimum and maximum service load (for planning scheduled maintenance), and more.
  • The option to gather application business statistics is available. Application developers can use the statistics to improve their understanding of application usage scenarios and to identify bottlenecks.

Applications compatible with cloud services

To be able to run in the cloud, 1C:Enterprise applications must meet the SaaS mode requirements. For the detailed list of requirements, see 1cFresh documentation.

The requirements include the use of data separation functionality, as well as implementations of remote administration functions, data import and export, backup generation, and more. Cloud applications must behave identically in the thin client and the web client, must not include OS-dependent features (because a cloud server might run either Windows or Linux), should avoid lengthy server calls, and so on.

Cloud applications can have mobile application clients developed using the mobile 1C:Enterprise platform.

Cloud application customization

Thousands of users from hundreds of companies can work with a single cloud application instance. They might require custom application features. Thus, service providers need tools to customize applications for certain user groups.

1cFresh provides two kinds of application customization tools:
  • External reports and external data processors. These customization tools, well known to 1C:Enterprise application users, were enhanced for cloud operations.
  • Configuration extensions. Extensions are plug-ins that add functionality to applications without changing them. Currently, extensions do not support all of the configuration objects; however, we are working on making this functionality available.

Summary

We believe cloud service development to be a promising trend worthy of significant resource investment.
The 1cFresh cloud service fully complies with the cloud service definition provided by IDC.

According to the Gartner definition, the 1cFresh service type is Application Platform as a Service (aPaaS): "Application Platform as a Service (aPaaS) is a form of PaaS that provides a platform to support app development, deployment and execution in the cloud." (source)

 

1C Developer team

28.06.2019 

Train a deep learning image classification model with ML.NET and TensorFlow

This sample may be downloaded and built directly. However, for a successful run, you must first unzip assets.zip in the project directory and copy its subdirectories into the assets directory.

Source Code - Click to download

Understanding the problem

Image classification is a computer vision problem. Image classification takes an image as input and categorizes it into a prescribed class. This sample shows a .NET Core console application that trains a custom deep learning model using transfer learning, a pretrained image classification TensorFlow model and the ML.NET Image Classification API to classify images of concrete surfaces into one of two categories, cracked or uncracked.

 

Dataset

The datasets for this tutorial are from Maguire, Marc; Dorafshan, Sattar; and Thomas, Robert J., "SDNET2018: A concrete crack image dataset for machine learning applications" (2018). Browse all Datasets. Paper 48. https://digitalcommons.usu.edu/all_datasets/48

SDNET2018 is an image dataset that contains annotations for cracked and non-cracked concrete structures (bridge decks, walls, and pavement).

The data is organized in three subdirectories:

  • D contains bridge deck images
  • P contains pavement images
  • W contains wall images

Each of these subdirectories contains two additional prefixed subdirectories:

  • C is the prefix used for cracked surfaces
  • U is the prefix used for uncracked surfaces

In this sample, only bridge deck images are used.

Prepare Data

  1. Unzip the assets.zip directory in the project directory.
  2. Copy the subdirectories into the assets directory.
  3. Define the image data schema containing the image path and category the image belongs to. Create a class called ImageData.
C#
 
class ImageData
{
    public string ImagePath { get; set; }

    public string Label { get; set; }
}
  4. Define the input schema by creating the ModelInput class. The only columns/properties used for training and making predictions are the Image and LabelAsKey. The ImagePath and Label columns are there for convenience to access the original file name and text representation of the category it belongs to respectively.
C#
 
class ModelInput
{
    public byte[] Image { get; set; }
    
    public UInt32 LabelAsKey { get; set; }

    public string ImagePath { get; set; }

    public string Label { get; set; }
}
  5. Define the output schema by creating the ModelOutput class.
C#
 
class ModelOutput
{
    public string ImagePath { get; set; }

    public string Label { get; set; }

    public string PredictedLabel { get; set; }
}

Load the data

  1. Before loading the data, it needs to be formatted into a list of ImageData objects. To do so, create a data loading utility method LoadImagesFromDirectory.
C#
 
public static IEnumerable<ImageData> LoadImagesFromDirectory(string folder, bool useFolderNameAsLabel = true)
{
    var files = Directory.GetFiles(folder, "*",
        searchOption: SearchOption.AllDirectories);

    foreach (var file in files)
    {
        if ((Path.GetExtension(file) != ".jpg") && (Path.GetExtension(file) != ".png"))
            continue;

        var label = Path.GetFileName(file);

        if (useFolderNameAsLabel)
            label = Directory.GetParent(file).Name;
        else
        {
            for (int index = 0; index < label.Length; index++)
            {
                if (!char.IsLetter(label[index]))
                {
                    label = label.Substring(0, index);
                    break;
                }
            }
        }

        yield return new ImageData()
        {
            ImagePath = file,
            Label = label
        };
    }
}
  2. Inside of your application, use the LoadImagesFromDirectory method to load the data.
C#
 
IEnumerable<ImageData> images = LoadImagesFromDirectory(folder: assetsRelativePath, useFolderNameAsLabel: true);
IDataView imageData = mlContext.Data.LoadFromEnumerable(images);

Preprocess the data

  1. Add variance to the data by shuffling it.
C#
 
IDataView shuffledData = mlContext.Data.ShuffleRows(imageData);
  2. Machine learning models expect input to be in numerical format. Therefore, some preprocessing needs to be done on the data prior to training. First, the label or value to predict is converted into a numerical value. Then, the images are loaded as a byte[].
C#
 
var preprocessingPipeline = mlContext.Transforms.Conversion.MapValueToKey(
        inputColumnName: "Label",
        outputColumnName: "LabelAsKey")
    .Append(mlContext.Transforms.LoadRawImageBytes(
        outputColumnName: "Image",
        imageFolder: assetsRelativePath,
        inputColumnName: "ImagePath"));
  3. Fit the data to the preprocessing pipeline.
C#
 
IDataView preProcessedData = preprocessingPipeline
                    .Fit(shuffledData)
                    .Transform(shuffledData);
  4. Create train/validation/test datasets to train and evaluate the model.
C#
 
TrainTestData trainSplit = mlContext.Data.TrainTestSplit(data: preProcessedData, testFraction: 0.3);
TrainTestData validationTestSplit = mlContext.Data.TrainTestSplit(trainSplit.TestSet);

IDataView trainSet = trainSplit.TrainSet;
IDataView validationSet = validationTestSplit.TrainSet;
IDataView testSet = validationTestSplit.TestSet;

Define the training pipeline

C#
 
var classifierOptions = new ImageClassificationTrainer.Options()
{
    FeatureColumnName = "Image",
    LabelColumnName = "LabelAsKey",
    ValidationSet = validationSet,
    Arch = ImageClassificationTrainer.Architecture.ResnetV2101,
    MetricsCallback = (metrics) => Console.WriteLine(metrics),
    TestOnTrainSet = false,
    ReuseTrainSetBottleneckCachedValues = true,
    ReuseValidationSetBottleneckCachedValues = true,
    WorkspacePath=workspaceRelativePath
};

var trainingPipeline = mlContext.MulticlassClassification.Trainers.ImageClassification(classifierOptions)
    .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));

Train the model

Apply the data to the training pipeline.

 
ITransformer trainedModel = trainingPipeline.Fit(trainSet);

Use the model

  1. Create a utility method to display predictions.
C#
 
private static void OutputPrediction(ModelOutput prediction)
{
    string imageName = Path.GetFileName(prediction.ImagePath);
    Console.WriteLine($"Image: {imageName} | Actual Value: {prediction.Label} | Predicted Value: {prediction.PredictedLabel}");
}

Classify a single image

  1. Make predictions on the test set using the trained model. Create a utility method called ClassifySingleImage.
C#
 
public static void ClassifySingleImage(MLContext mlContext, IDataView data, ITransformer trainedModel)
{
    PredictionEngine<ModelInput, ModelOutput> predictionEngine = mlContext.Model.CreatePredictionEngine<ModelInput, ModelOutput>(trainedModel);

    ModelInput image = mlContext.Data.CreateEnumerable<ModelInput>(data,reuseRowObject:true).First();

    ModelOutput prediction = predictionEngine.Predict(image);

    Console.WriteLine("Classifying single image");
    OutputPrediction(prediction);
}
  2. Use the ClassifySingleImage inside of your application.
C#
 
ClassifySingleImage(mlContext, testSet, trainedModel);

Classify multiple images

  1. Make predictions on the test set using the trained model. Create a utility method called ClassifyImages.
C#
 
public static void ClassifyImages(MLContext mlContext, IDataView data, ITransformer trainedModel)
{
    IDataView predictionData = trainedModel.Transform(data);

    IEnumerable<ModelOutput> predictions = mlContext.Data.CreateEnumerable<ModelOutput>(predictionData, reuseRowObject: true).Take(10);

    Console.WriteLine("Classifying multiple images");
    foreach (var prediction in predictions)
    {
        OutputPrediction(prediction);
    }
}
  2. Use the ClassifyImages inside of your application.
C#
 
ClassifyImages(mlContext, testSet, trainedModel);

Run the application

Run your console app. The output should be similar to that below. You may see warnings or processing messages, but these messages have been removed from the following results for clarity. For brevity, the output has been condensed.

Bottleneck phase

text
 
Phase: Bottleneck Computation, Dataset used:      Train, Image Index: 279
Phase: Bottleneck Computation, Dataset used:      Train, Image Index: 280
Phase: Bottleneck Computation, Dataset used: Validation, Image Index:   1
Phase: Bottleneck Computation, Dataset used: Validation, Image Index:   2

Training phase

text
 
Phase: Training, Dataset used: Validation, Batch Processed Count:   6, Epoch:  21, Accuracy:  0.6797619
Phase: Training, Dataset used: Validation, Batch Processed Count:   6, Epoch:  22, Accuracy:  0.7642857
Phase: Training, Dataset used: Validation, Batch Processed Count:   6, Epoch:  23, Accuracy:  0.7916667

Classification Output

text
 
Classifying single image
Image: 7001-220.jpg | Actual Value: UD | Predicted Value: UD

Classifying multiple images
Image: 7001-220.jpg | Actual Value: UD | Predicted Value: UD
Image: 7001-163.jpg | Actual Value: UD | Predicted Value: UD
Image: 7001-210.jpg | Actual Value: UD | Predicted Value: UD
Image: 7004-125.jpg | Actual Value: CD | Predicted Value: UD
Image: 7001-170.jpg | Actual Value: UD | Predicted Value: UD
Image: 7001-77.jpg | Actual Value: UD | Predicted Value: UD

Improve the model

  • More Data: The more examples a model learns from, the better it performs. Download the full SDNET2018 dataset and use it to train.
  • Augment the data: A common technique to add variety is to augment the data by applying different transforms to each image (rotate, flip, shift, crop). This gives the model more varied examples to learn from.
  • Train for a longer time: The longer you train, the more tuned the model will be. Increasing the number of epochs may improve the performance of your model.
  • Experiment with the hyper-parameters: In addition to the parameters used in this tutorial, other parameters can be tuned to potentially improve performance. For example, changing the learning rate, which determines the magnitude of updates made to the model after each epoch, may improve performance (see the sketch after this list).
  • Use a different model architecture: Depending on what your data looks like, the model that can best learn its features may differ. If you're not satisfied with the performance of your model, try changing the architecture.
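
For example, a tuned set of trainer options might look like the following sketch. The values are illustrative, not recommendations, and the column names, validationSet, and workspaceRelativePath are assumed to be the same ones defined earlier in this tutorial.

C#
 
// Illustrative only: adjust a few hyper-parameters and try another architecture.
var tunedOptions = new ImageClassificationTrainer.Options()
{
    FeatureColumnName = "Image",
    LabelColumnName = "LabelAsKey",
    ValidationSet = validationSet,
    Arch = ImageClassificationTrainer.Architecture.InceptionV3, // different pre-trained architecture
    Epoch = 300,            // train for more epochs
    BatchSize = 10,         // images processed per training step
    LearningRate = 0.005f,  // magnitude of the updates made after each epoch
    MetricsCallback = (metrics) => Console.WriteLine(metrics),
    TestOnTrainSet = false,
    ReuseTrainSetBottleneckCachedValues = true,
    ReuseValidationSetBottleneckCachedValues = true,
    WorkspacePath = workspaceRelativePath
};

var tunedPipeline = mlContext.MulticlassClassification.Trainers.ImageClassification(tunedOptions)
    .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));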

Ref: https://docs.microsoft.com/en-us/samples/dotnet/machinelearning-samples/mlnet-image-classification-transfer-learning/ 

ML.NET and Model Builder

ML.NET is an open-source, cross-platform machine learning framework for .NET developers. It enables integrating machine learning into your .NET apps without requiring you to leave the .NET ecosystem or even have a background in ML or data science.

We are excited to announce new versions of ML.NET and Model Builder!

In this post, we’ll cover the following items:

  1. Model Builder Preview
  2. ML.NET v1.5.5
  3. Virtual ML.NET Community Conference
  4. Feedback
  5. Get started and resources

Model Builder Preview

This preview brings a lot of big changes to Model Builder, and we’re excited to get your feedback on all the new features which include:

  • Config-based training with generated code-behind files
  • Restructured Advanced Data Options
  • Redesigned Consume step

You can sign up for the Preview at aka.ms/blog-mb-preview.

Config-based training with generated code-behind files

The Model Builder experience has been revamped! Now when you right-click on your project in Solution Explorer and select Add > Machine Learning, the Add New Item Dialog opens, and you can add an ML.NET Model.

New Item Dialog in Visual Studio

After adding your model, the Model Builder UI opens, and a new item (an *.mbconfig file) shows up in the Solution Explorer.

Model Builder UI in Visual Studio

Close up of Solution Explorer in Visual Studio

At any point when using Model Builder, if you close out of the UI, you can double click on the *.mbconfig in Solution Explorer, and it will open the UI again to your last saved state.

After training, three files are generated under the *.mbconfig file:

Solution Explorer expanded with mbconfig in Visual Studio

  • Model.consumption.cs: This file contains the Model Input and Model Output schemas as well as the Predict function generated for consuming the model.
  • Model.training.cs: This file contains the training pipeline (data transforms, algorithm, algorithm hyperparameters) chosen by Model Builder to train the model. You can use this pipeline for re-training your model.
  • Model.zip: This is a serialized zip file which represents your trained ML.NET model.

Previously, these files were added as two new projects (a class library for model consumption code and a console app for the training pipeline). The new experience is similar to adding a new form in a Windows Forms application, where there are code-behind files behind the form and double clicking the form opens the designer.

If you open the *.mbconfig file, you can see that it is simply a JSON file with state information:

{
  "TrainingConfigurationVersion": 0,
  "TrainingTime": 10,
  "Scenario": {
    "ScenarioType": "Classification"
  },
  "DataSource": {
    "DataSourceType": "TabularFile",
    "FileName": "C:\Desktop\Datasets\yelp_labelled.txt",
    "Delimiter": "t",
    "DecimalMarker": ".",
    "HasHeader": true,
    "ColumnProperties": [
      {
        "ColumnName": "Comment",
        "ColumnPurpose": "Feature",
        "ColumnDataFormat": "String",
        "IsCategorical": false
      },
      {
        "ColumnName": "Sentiment",
        "ColumnPurpose": "Label",
        "ColumnDataFormat": "String",
        "IsCategorical": true
      }
    ]
  },
  "Environment": {
    "EnvironmentType": "LocalCPU"
  },
  "Artifact": {
    "Type": "LocalArtifact",
    "MLNetModelPath": "C:\source\repos\ConsoleApp8\ConsoleApp8\MLModel1.zip"
  },
  "RunHistory": {
    "Trials": [
      {
        "TrainerName": "AveragedPerceptronOva",
        "Score": 0.8059,
        "RuntimeInSeconds": 4.4
      }
    ],
    "Pipeline": "[{"EstimatorType":"MapValueToKey","Name":null,"Inputs":["Sentiment"],"Outputs":["Sentiment"]},{"EstimatorType":"FeaturizeText","Name":null,"Inputs":["Comment"],"Outputs":["Comment_tf"]},{"EstimatorType":"CopyColumns","Name":null,"Inputs":["Comment_tf"],"Outputs":["Features"]},{"EstimatorType":"NormalizeMinMax","Name":null,"Inputs":["Features"],"Outputs":["Features"]},{"LabelColumnName":"Sentiment","EstimatorType":"AveragedPerceptronOva","Name":null,"Inputs":null,"Outputs":null},{"EstimatorType":"MapKeyToValue","Name":null,"Inputs":["PredictedLabel"],"Outputs":["PredictedLabel"]}]",
    "MetricName": "MicroAccuracy"
  }
}

This new Model Builder experience brings many benefits. You can:

  • Specify the name of your model and generated code.
  • Have more than one Model Builder-generated model in a solution.
  • Save your state and come back to the last saved state. If you spend an hour training and close out of Model Builder, now you don’t have to start over and can just pick up where you left off.
  • Share the *.mbconfig file and collaborate on the same Model Builder instance via source control.
  • Use the same *.mbconfig file in Model Builder and the ML.NET CLI (coming soon!).

Restructured Advanced Data Options

In the last Model Builder release, we added advanced data options for data loading which gave you more control over column settings and data formatting.

In this release, we added several more options and reorganized the options to make selecting your column settings even easier:

  • Purpose: Choose whether the column is a Feature column, a Label column, or a column to Ignore during training.
  • Data type: Choose whether the data in the column is a String, Single, or Boolean.
  • Categorical: Choose whether the column is categorical or not.

Advanced Data Options in Model Builder

Redesigned Consume Step

We have redesigned the consume step to make a smooth transition from training and evaluating a model to using that model to make predictions in an end-user application.

A code snippet has been provided in the UI which demonstrates how to set up the Model Input as well as how to use the generated Predict function to return the predicted output.

Each Model Input property is filled in with sample data from the first row of your dataset. You can use the copy button in the top right of the box to copy the entire code snippet; then once you paste this code into your end-user application, you can modify the Model Input fields to get real data to feed into your model.
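
As a rough, hypothetical sketch (the real snippet is generated from your own model name and dataset columns, so the names below are assumptions), consuming a model named MLModel1 trained on the sentiment data shown earlier might look like this:

C#
 
// Hypothetical sketch of the snippet shown in the Consume step.
// Class and property names depend on your model name and dataset columns.
var sampleData = new MLModel1.ModelInput()
{
    Comment = "Wow... Loved this place.", // pre-filled from the first row of your dataset
};

// Call the generated Predict function to score the sample input.
var result = MLModel1.Predict(sampleData);

// For a classification scenario, the generated output typically exposes the
// predicted label (property name assumed here) and the per-class scores.
Console.WriteLine($"Predicted sentiment: {result.PredictedLabel}");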

Consume step in Model Builder

Additionally, there is a new Sample project section which generates an application that uses your model and adds the project to your solution. In previous versions of Model Builder, a sample console app was automatically added to your solution; now you can choose whether you want to add a new project to use your model.

Currently, there is only the option to add a console app, but in the future, we plan to add support for Web APIs, Azure Functions, and more.

ML.NET v1.5.5

This release of ML.NET brings numerous bug fixes and enhancements as well as the following new features:

  • New API that accepts a double type for the confidence level, which helps when you need higher precision than an int allows. Thank you @esso23 for your contributions!
  • Support for exporting the ValueMapping estimator to ONNX.
  • New API to specify whether the output from TensorFlow is batched or not (previously, ML.NET always assumed the output was batched, which caused errors when that wasn’t true).

Check out the release notes for more details.

Virtual ML.NET Community Conference

On May 7th, the 2nd annual Virtual ML.NET Community Conference will kick off with 2 days of sessions on all things ML.NET, and we’re looking for speakers to talk about:

  • MLOps
  • Case studies and real-life use cases
  • Interactive computing with Jupyter
  • ML.NET interop (ONNX)
  • ML.NET and IoT devices
  • ML.NET in F#
  • Big Data and ML.NET
  • A journey from experimentation to production
  • Anything else ML.NET related you can think of!

This is a 100% free event, by the community, for the community.

Published By:

Bri Achtman - Program Manager, .NET

March 15th, 2021

The Blockchain Explained to Web Developers, Part 3: The Truth

Chains

After exploring the blockchain theory and using it for real, we now have a better understanding of its strengths and weaknesses. Surprisingly, most of our conclusions are very different from what you will read in the blogosphere. Maybe it’s because we don’t blindly relay the fascination caused by the huge valuations of BitCoin and others. Maybe it’s because the hard truth about the blockchain is that it’s not ready yet. Read on to understand our take on the blockchain, based on strong evidence.

The Technology Is Not Mature Enough

As explained in detail in the previous post in this series, developing Decentralized Apps over a blockchain is a pain. The development community is small, available code snippets don’t work, public tutorials are outdated, the libraries are crippled with bugs, developer tooling is lacking, bugs are silent, etc.

It’s not that the Ethereum developers and community are bad; they’re amazing, and they’re pouring a lot of time and expertise into their tools. But building a blockchain framework is a huge amount of work, and they’re only halfway through. Ethereum hasn’t reached the point of usability yet. I’m confident that this will change in the future, but I don’t know if it’s a matter of months or years.

Tip: We haven’t developed DApps for Bitcoin, but I’ve heard it’s worse. Instead of using a JavaScript-like language (Solidity) for smart contracts, you must use an assembly-like language, which isn’t even Turing-complete. Yikes.

The consequence is that developers don’t want to work on blockchain projects - they find it very frustrating. If you force them to work with a technology they hate, they will leave. Since it’s extremely hard to find skilled developers these days, you should think twice before taking a chance on the blockchain.

The second consequence is that it’s impossible to estimate the time it will take to build a project on the blockchain. If you can’t estimate your costs, good luck building a Business Model on the blockchain.

Smart Contracts Can’t Call APIs

In our blockchain experimentation, everything a bit “smart” in the contract had to be moved to a plain old web service running outside of the blockchain, in a trusted environment. For instance, a smart contract can’t figure out if the person asking for an ad placement is the author of a pull request, because a smart contract can’t call the GitHub API. As a consequence, our smart contract keeps only a very minimal amount of logic, becoming, in fact, a dumb contract. It’s not because we wanted to, it’s because we couldn’t do otherwise.

By design, a blockchain is deterministic. That means that if you take the entire history of blocks, and replay it locally, you should end up with the same state as every other node. This forbids the call to external APIs, where responses may change over time, or according to who calls them.

Blockchains are walled gardens. You can execute a contract from the outside world, but a contract itself can’t require data from a source outside of the blockchain. If a smart contract needs external data, someone must push the data to the blockchain first. There is an effort to ease this process through a concept called Oracles. But Oracles need a reputation system and governance. So much for fully-automated contracts and disintermediation.

In the real world, very few applications work in isolation. All the applications we’ve built for the past 3 years relied on external APIs - for identity management, payment, live data sources, image processing, storage, etc. The limited capabilities of smart contracts make them useless in real-world situations.

You Need A PhD to Understand It

If you read through the first blog post of this series, you probably think that you have a good basic understanding of the blockchain. Now, go and read this article. I’m an average engineer with only 20 years of experience in Web Development, and I couldn’t understand anything after the Jurassic Park reference. Terms like “two-way-pegged blockchains”, “pre-determined Host Oracle Contract”, and sentences like “The M-S result, combined with our inability to feed (non-BB) a revelation mechanism, means that Oracles are out” make me feel like a first grader.

The blockchain concept is complex. Existing implementations rely on rare design patterns that you don’t learn in college. The blockchain vocabulary is kabbalistic.

Developing decentralized apps on top of blockchains requires understanding too many complicated concepts to fit in an average developer’s brain. My opinion is that there are not enough highly skilled programmers to support the revolution promised by the blockchain. And there will never be, as long as it’s so hard to understand.

As a consequence, most Decentralized apps are very buggy. A recent article stated that smart contracts contain 1 bug every 10 lines of code, making Ethereum “candy for hackers”. It wouldn’t be such a big deal if fixing bugs was easy. Unfortunately, as we explained in the previous post, you can’t update a smart contract. You have to create a new contract, transfer all the data and pointers from the old contract to the new one, and wait for the blockchain to propagate the change. The old buggy contracts and transactions remain in the blockchain forever.

Developer Power

The blockchain authors suggest using “code as law”. This also means “bugs as law”, as every software contains bugs. These bugs can be used by smart developers (criminals, the NSA, etc.) to avoid playing by the rules. Bugs are very common, even in popular open-source projects. Bitcoin, for instance, suffered several critical bugs leading to “cybertheft”. So leaving the keys to developers also means giving extraordinary power to the mean developers.

I don’t want to go all FUD (Fear, Uncertainty and Doubt) on you, but the possible scenarios of a society governed by machines don’t all finish with a happy ending in my mind.

When machines control the world

And even if we don’t consider mean developers, giving the power to good developers is dangerous, too. The problem is that developers are irresponsible (no harm intended - I’m a developer myself). It’s not that they’re childish, it’s that nobody ever taught them to write the law.

Also, developers are not elected by the people. If you don’t agree with the direction that Bitcoin takes (favoring speculation rather than practical applications), too bad for you - there is nothing you can do to change it. This is happening right now: the Bitcoin network is going through a severe crisis, caused by disagreements between a few core developers.

The decisions of half a dozen developers may cause the collapse of a billion dollar market capitalization. But nobody will hold them accountable in case of failure.

Waste of Resources

A blockchain is not cost-effective at all. In fact, it’s a huge waste of resources.

Take data replication, for instance. The blockchain replicates all transactions across all nodes. Engineers invented replication strategies with much better space efficiency long ago. Compare the blockchain with RAID6 disk clustering:

In a blockchain network, 10 nodes of 1GB each allow for a total replicated data volume of 1GB. You can lose up to 9 nodes in the network and still recover the entire data set.

In a RAID6 pool, 10 hard disks of 1GB each allow for a total replicated data volume of 8GB. You can lose up to 2 disks in the pool and still recover the entire data set.
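
As a back-of-the-envelope sketch of that comparison (plain arithmetic over the figures above, not a model of any particular system):

C#
 
// Usable capacity and fault tolerance for 10 storage units of 1GB each.
int units = 10;
int unitGb = 1;

// Blockchain-style full replication: every node stores the whole ledger.
int chainUsableGb = unitGb;              // 1GB usable out of 10GB of raw storage
int chainFailuresTolerated = units - 1;  // any single surviving node still holds everything

// RAID6: two units' worth of parity, the rest holds data.
int raidUsableGb = (units - 2) * unitGb; // 8GB usable out of 10GB of raw storage
int raidFailuresTolerated = 2;           // any two disks can fail

Console.WriteLine($"Full replication: {chainUsableGb}GB usable, survives {chainFailuresTolerated} failures");
Console.WriteLine($"RAID6: {raidUsableGb}GB usable, survives {raidFailuresTolerated} failures");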

Mining nodes require very expensive hardware, with high-end GPU cards and a huge amount of memory.

And it’s not just about buying expensive hardware. 99.99% of the computing is simply wasted. All miners compete to mine a block by solving expensive math challenges. In Bitcoin, only one node wins every 10 minutes and is actually useful to the chain by creating a block. The computation done by all the other nodes is thrown away.

The Ethereum blockchain is trying to fix that: they plan to switch from a proof-of-work consensus algorithm to a proof-of-stake, which is much less resource intensive. But proof-of-stake also has drawbacks, such as giving more power to people or companies owning high amounts of cryptocurrency. Besides, it’s far from ready yet (expect it in at least a year from now).

This waste of storage, CPU, and memory translates into a huge waste of energy. According to a bitcoin mining-farm operator, energy consumption totaled 240kWh per bitcoin in 2014 (the equivalent of 16 gallons of gasoline). Mining farms are a distributed engine turning electricity into heat. A blockchain is, in short, an expensive radiator. Energy efficiency is a big deal on a globally warming planet.

Very Expensive

Who pays for all the wasted energy? The companies that publish and use smart contracts. Yes, that’s you, if you intend to run a business on the blockchain. When you pay for a transaction on the blockchain, you also pay 99.99% of the network running at full speed for nothing. That makes blockchain transactions expensive.

A million dollars in bank notes

An average BitCoin transaction requires a fee of BTC 0.0002 ($0.11). This price is rising. It’s not really cheaper than a bank transaction fee (unless you consider a transfer across two countries with different currencies, of course).

For ZeroDollarHomepage, executing a 10-line script method on Ethereum costs about one cent ($0.01). That’s insanely expensive. Amazon Lambda, for instance, costs $0.0000002 per request (after the first million requests each month) - roughly 50,000 times less per execution.

It’s normal to pay for hosting costs when you use a platform, but the Blockchain costs are orders of magnitude higher than the most expensive PaaS.

Volatility and Speculation

You could say that the blockchain cost isn’t such a big deal, as long as people are willing to use the network and pay for transactions. It’s a question of supply and demand, and the demand for blockchain and cryptocurrencies is currently high enough to make it profitable. But this high demand leads to speculation, and therefore the price of computing and storage in a blockchain (any blockchain) is highly volatile.

BTC to USD

Some analysts compare Bitcoin to a Ponzi scheme, and predict that the market value will collapse once general interest disappears.

If we build a business based on the Ethereum blockchain, most of our expenses will be in Ether. If we don’t mine it ourselves, we’ll have to pay for that Ether in real money. But since the USD value of Ether may vary tenfold within a year, our business can move from profitable to worthless in the same timeframe. And we can’t do anything about it. On the other hand, if we mine ourselves, what is currently affordable (running a small server to cover expenses in Ether) might become very expensive once very large mining farms move from Bitcoin to Ethereum.

The high volatility of cryptocurrencies forbids any long-term profitable business built on the blockchain - except speculation.

Slow As Hell

Compared to many other innovations based on computers and networks, the blockchain is very slow. Experts say that you should wait 6 blocks to make sure that a transaction is legit. With block times of roughly 15 seconds in Ethereum and 10 minutes in Bitcoin, that means more than 1 minute in Ethereum, or more than 1 hour in Bitcoin.

In a traditional ad server, scheduling an ad takes about 100ms. If you’ve used our ZeroDollarHomepage Ad Server, you probably had a very different experience: scheduling an ad takes about a minute. The network transport and replication account for a small share of that duration; most of the time is spent waiting for the network to mine the transaction, and then to add a few more blocks after it. All in all, the Ethereum blockchain is several orders of magnitude slower than traditional computing.

wait

For end users, every second counts. The Web Performance Optimization trend focuses on improving revenue by shaving one or two seconds off download time. Betting on a technology that requires a transaction to be acknowledged by the entire world isn’t the best way to do business.

Free Market and Anarchy

One of the promises of the blockchain is to liberate markets that still require an intermediary. No more lawyers, bankers, or bookmakers. A great opportunity for new businesses?

Except these intermediaries currently report criminal activities to the authorities (governments and law enforcement agencies). If you remove the intermediaries, you also remove the police, and you let criminals proliferate. The first bitcoin application at scale was called The Silk Road. It was an online marketplace for everything illegal: drugs, weapons, child pornography, etc. Not to mention the ability to use bitcoins for tax evasion.

Even the proponents of a free market economy recognize that a certain level of regulation is necessary to avoid total chaos. Running a business in a land full of criminals with no police isn’t profitable - unless you’re a criminal, too. For instance, the Mt. Gox bankruptcy in 2014 cost Bitcoin users about $450 million.

Just like it took a long time for governments to control the Internet (which was, and remains, a haven for criminals), it will take a long time for our lawmakers to control the anarchy unleashed by blockchains. The blockchain may carry the promise of a better future in the long term, but for the near future, you’d better be armed.

Do You Really Need A Blockchain?

A large share of the hype around the blockchain comes from people who don’t really understand its shortcomings. They would probably use another solution if they were better informed. Here are a few bad reasons why you should probably not choose blockchain technology.

You can use a private blockchain

Nearly 80% of the blockchain projects I hear about, especially in finance, are based on private blockchains. This completely defeats the main purpose, which is to reach an agreement between non-trusted parties. If a project runs on a private blockchain, then only trusted parties can join it, and you don’t have a trust problem. In a trusted network, there are many, many other tools to share a ledger of facts - all much better optimized than the blockchain (for instance: a web service).

It offers a way to reach distributed consensus

It does, but only if this consensus can be written as code. For instance, a company working with music rights distribution recently contacted us to build an international platform for artist remuneration on the blockchain. Except that when two countries disagree on how to pay rights holders, they both have valid contracts. Only a court can decide which contract wins. No smart contract can replace that. You must have clear governance rules that already work before trying to automate them in a blockchain.

It’s secure

Asymmetric cryptography is one of the blockchain’s strengths. However, the blockchain technology, just like any other, is safe only until someone finds a vulnerability. It has already happened in the past. The computer science behind the blockchain is so complex that very few developers can contribute to or review the code. Consider smart contracts and blockchains as relatively less secure than, say, TLS on the web (through HTTPS). Oh, and even if the software works perfectly, it doesn’t prevent fraud. Remember the double spend problem from our first post? It turns out people regularly try that in blockchains (see the latest 200 double spends in the Bitcoin blockchain).

It’s transparent

Granted, all transactions are public, and they expose location and IP address. But no personal information ever transits - only anonymous hashes. Even the creator of Bitcoin is a mystery. So blockchain transparency doesn’t prevent crime or fraud. Also, transparency is usually an inconvenience for businesses. Are you willing to bet your business on a technology that lets everyone track all your transactions, and exposes your code to hackers?

Data is replicated and safe

Sure, but with the least cost-effective replication strategy. Amazon S3 replicates every bit of data at least 3 times with 100% uptime, for a fraction of the price. And if you actually need the full transaction history, use an event store.

It connects anonymous peers

But if it’s only for shared storage (i.e. if you don’t need fact ordering), then regular peer-to-peer network protocols like BitTorrent are enough.

It’s hip

I can’t argue with that: yelling the word “blockchain” out loud is currently a great way to grab an innovation budget. However, many of the shining products that claim to run on the blockchain are merely PowerPoint presentations. Besides, you’ll get better results with many other technologies. Not to mention that the word blockchain also evokes money laundering, tax fraud, and pornography.

If you want to build your business on the blockchain, be certain that you need it, and that it will be really useful for your use case.

Conclusion

Blockchains are a very smart idea, with huge possible implications. But are the current implementations ready to power the disruptive applications of the next decade?

On the technical side, some elementary features are simply not feasible. Blockchains are not efficient enough, not developer-friendly enough, and they give too much power to a small league of extraordinary developers without enough political and economic background.

On the business side, the blockchain is moving too fast, it’s expensive, and often overkill. Costs may vary tenfold for no reason. Building a business on such an unstable platform is incredibly risky.

My take is that we have to wait. The blockchain isn’t ready yet. It needs more maturity, a killer app other than a speculation engine, a larger developer community, and more ecological and economic responsibility. How long will it take? Maybe a year or two? Nobody can tell.

To be honest, this conclusion surprised me. Most of the publications about the blockchain suggest the opposite. They say “it’s time”, “don’t miss the train”, or “the giant businesses of the next decade are being built on the blockchain right now”. Maybe they are wrong, or maybe we are wrong. We’ve tried to support this analysis with strong evidence. If you have a different opinion, please voice it in the comments below.

We’ll be following the developments in the different blockchain projects closely. Make sure you follow this blog for related news!