Recent articles

“Unclear standards inhibit willingness to invest”

An interview with Professor Dr. Axel Sikora, Scientific Director of the Institute of Reliable Embedded Systems and Communication Electronics (ivESK) at the University of Applied Sciences, Offenburg, Deputy Board Member of the Hahn-Schickard Association of Applied Research, Villingen-Schwenningen, and Board member of the M2M Alliance e.V.

Part 1:

Adoption of M2M solutions can be a slow process. Unclear data protection framework conditions and a lack of cross-industry standards are considered to be among the reasons. That is why All about M2M discussed M2M standardization with Professor Dr. Axel Sikora, Scientific Director of the Institute of Reliable Embedded Systems and Communication Electronics (ivESK) at the University of Applied Sciences, Offenburg, and Deputy Board Member of the Hahn-Schickard Institute, Villingen-Schwenningen.

Professor Sikora, to what extent would global standards speed up the adoption of M2M in all areas of life?

At the moment, standards are inhibiting willingness to invest because, in part, they either do not exist or the choice is simply too great. That is why many companies are naturally reluctant and are biding their time to see which standards make the running.

Yet standards are also a prerequisite for horizontal integration of different applications. Practical standards already exist for individual niche areas such as machine monitoring or automatic emergency calls from a connected apartment for elderly people. But if we are talking about M2M solutions in complex scenarios such as a Smart City or a Smart Factory, it is all about horizontal integration of different applications and at present it is really not clear yet which standards will prevail.

Why not?

Initial approaches to these complex solutions do exist, but many of them are not yet sufficiently detailed. They disregard many aspects, especially network and node management, and are therefore not yet ready for practical use. Worse still, they require extensions that have proprietary components.

You say that in part there are too many and in part too few standards. Where are there too many and where too few?

Actually, every application area has a number of approaches. That goes for home automation just as it does for building automation, for industrial automation as for process automation. Take a closer look at all these areas and you will find a handful or two of standardization approaches in each of them.

Who is responsible for standardization, then?

It is strikingly apparent that state or intergovernmental standardization bodies such as the European Telecommunications Standards Institute (ETSI), the Verband der Elektrotechnik Elektronik Informationstechnik (VDE) or the American National Standards Institute (ANSI) are less and less involved. Increasingly, the movers are industry alliances or ecosystems that are set up independently. On the one hand that is to be welcomed, because these alliances can act much faster than standardization bodies. On the other, it means that the establishment of such groups is no longer synchronized. In other words, if you mark on a map which bodies are responsible for which areas, you will find a lot of overlap between different bodies but also areas that remain white on the map. It is uncoordinated and makes matters even more complicated.


How internationally aligned are these alliances and standardization bodies?

That depends. The 3GPP, which deals with standardization of mobile communications, is a very good example of global cooperation. There is also, however, a large number of activities that are more regional in scope, and that applies both to America and to Europe.

What part do the Europeans play compared with American and Asian players?

The American market, like the Asian market, has a characteristic that the European market does not have: alliances and ecosystems that are shaped by very big players, such as Google or Amazon. In Europe this kind of interaction does not exist. In its place we in Europe have more symmetrical alliances in which smaller and larger SMEs join forces to take standardization forward together.

Do smaller companies and startups stand any chance whatever of playing a part in the development and implementation of standards if Google and Amazon are also on board?

That is a question not just for startups but for the entire European market. Their only chance, as I see it, is to join together in umbrella organizations, see to it that they head jointly in one direction, and make use of the advantages of multilateral discussions rather than working independently of one another.

Does any such organization exist in Europe?

In the area of smart metering, in which we, for example, are active, there is the Open Metering System (OMS) Group. It consists of around 60 companies involved in smart metering and smart grids and does very good standardization work. That point is also made by government agencies such as the Federal Office for Information Security (BSI) in connection with the security of smart meter gateways.

Which players should be involved in standardization?

That is a classic conflict of objectives. On the one hand you need as many players as possible around the table in order to take all aspects into consideration and leave no white areas on the map. On the other, the more players are involved, the more difficult standardization becomes. The network operators, equipment manufacturers and, in connection with complex solutions like the Smart City, municipal administrations should of course be represented, but above all I see an increasing need to include service providers in the discussion. When we talk about connected applications we are increasingly talking about changing service models too. Until now the bodies have been shaped to a large extent by hardware and device-centered companies. The service providers that are then to market, operate and service the solutions are relatively seldom represented.

Read more about the technical levels and risks of standardization in the second part of the interview, which will follow shortly.

Comments (1)

Recent comments on this article

  • Ken Figueredo: IoT applications cover a broad range of industry sectors and segments which explains the huge number of standards that claim the IoT label. Professor Sikora points out that standards are a prerequisite for horizontal integration of different applications. This highlights the need for standard IoT services (which sit above connectivity technologies, protocols and proprietary device 'standards') to manage populations of devices and their interaction across different applications (i.e. to ensure cross-silo interoperability). This was the topic of a recent IEEE debate which I have written about here: ETSI, in collaboration with several other international standardization bodies (from China, India, Japan, S. Korea and the USA) have already produced a first release of such a standard through the oneM2M Partnership Project. ...


Narrow Band-IoT – a new standardized cellular technology optimized to enable the Internet of Things

To date, many IoT applications have lacked a cellular technology for the “simple” things that is inexpensive, yet provides wide coverage, deep indoor penetration and long battery life. Narrow Band-IoT (NB-IoT) will bridge this gap. In a multi-part series, we will explain exactly how NB-IoT works and when and how the technology can be used.

For M2M solutions in the smart home, short-range wireless technologies like WLAN or Bluetooth might be used. In a smart factory, fixed-line connections such as industrial Ethernet or LAN connect machines over relatively short distances. 2G, 3G, 4G and, soon, 5G mobile networks are the technology of choice wherever large amounts of data are involved, swift responses are required, or a highly available network is needed, such as for remote control of machines, for autonomous cars or to monitor traffic or wind turbines.

NB-IoT as a Low Power, Wide Area (LPWA) technology

NB-IoT is an LPWA technology that addresses the ultra-low-end M2M/IoT sector, a mass market that cannot be served efficiently by existing cellular technologies because they do not adequately meet its requirements for small data volumes, low power consumption and low cost. Furthermore, NB-IoT has a link budget that is 20 dB better than GSM's, allowing for deep indoor penetration, and battery life can be extended massively. Standardized and operating in licensed spectrum, NB-IoT ensures security, stability and reliability for our customers in the future.
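To give a feel for what a 20 dB link-budget gain means, here is a back-of-the-envelope sketch (not from the article; the path-loss exponents are assumed values) using the simple log-distance path-loss model, where the achievable range grows by a factor of 10^(ΔL / (10·n)):

```python
# Illustrative calculation: how a 20 dB link-budget improvement over GSM
# translates into extra range under a log-distance path-loss model.
# The path-loss exponents below are textbook assumptions, not measured values.

def range_gain(delta_db: float, path_loss_exponent: float) -> float:
    """Factor by which maximum range grows for a given dB link-budget gain."""
    return 10 ** (delta_db / (10 * path_loss_exponent))

DELTA_DB = 20.0  # NB-IoT improvement over GSM cited in the text

# Free space (n = 2) vs. a cluttered indoor/urban environment (n = 3.5).
for n in (2.0, 3.5):
    print(f"n = {n}: range extended by a factor of {range_gain(DELTA_DB, n):.1f}")
```

In free space the range grows tenfold; in cluttered indoor environments the gain is smaller but is what enables the "deep indoor penetration" mentioned above, since 20 dB roughly corresponds to the loss of one or two additional interior walls.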

NB-IoT for the ultra-low-end IoT market

Many M2M applications generate much smaller amounts of data, transmit infrequently and can tolerate high latency. Yet in many use cases, such as tracking high-value goods or remote gas and water meter reading, this data has to be transmitted over long distances. Some use cases require M2M modules that function autonomously over long periods with extremely low energy consumption, far away from the nearest mobile network mast. In such cases deep indoor penetration, strong propagation and long battery life are important. This applies to places that are hard to access, such as pipelines or channels. What all these use cases have in common is that they require low cost for a massive number of devices.

Successful test in the Telekom network

Standardization of Narrow Band-IoT has already made significant progress, and Deutsche Telekom is able to influence ongoing developments as a member of the 3GPP initiative. In the fall of 2015, Telekom and the Chinese network supplier Huawei achieved the world’s first implementation of pre-standard NB-IoT on commercial network elements, using only a software upgrade, with a smart parking guidance system. If standardization is agreed in the course of this year and further trials are successful, the first commercial applications are expected to emerge from 2017 onwards.


Gateways build communication bridges

To enable different manufacturers’ terminal devices to communicate with each other on the Internet of Things, translators are required. Gateways perform this task, orchestrating data interchange between Internet-enabled things – be they production machines on a factory shop floor, sensors in central heating systems, or security cameras.

The Internet of Things (IoT) is picking up speed. In the consumer sector, growth is already very dynamic thanks to connected personal devices and sensors around the home, or so the pundits say. On the industrial Internet of Things, in contrast, the full-scale success story has yet to kick in, analysts claim. Yet the potential for business benefits through greater efficiency is convincing.

Terminal devices speak many languages

Not all devices that will be integrated into the IoT in the medium and long term are able to share data with the Net directly and without problems. They may lack a suitable interface or their operating system may be a proprietary one. This lack of compatibility impedes the smooth flow of data that is so important for the IoT. One solution approach is to use gateways that build a bridge to the Internet of Things with its many and varied applications for terminal devices of this kind.

Gateways make a material contribution toward eliminating connectivity problems between individual devices and the IoT. The key to the solution is that the IoT gateways on the market support different communication paths, translating between protocols such as Modbus and GSM. In this way, a wide range of different devices can be coupled without problems. To do so, gateways collect data from different sources and make it available on the Internet. Users of individual components neither need to deal with the complexity of data interchange nor accept high costs for a high-speed interface to connect them to the IoT. The gateway establishes the necessary connection.
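Conceptually, this translation step can be sketched in a few lines. The following is a minimal, hypothetical illustration (the register layout, scaling factors and function names are assumptions, not a real product API): a gateway polls a Modbus-style register map from a local device and republishes the values as JSON for an Internet-facing backend.

```python
import json

# Hypothetical register map: register address -> (field name, scale factor).
# Real fieldbus devices document such maps in their datasheets.
REGISTER_MAP = {0: ("temperature_c", 0.1), 1: ("pressure_bar", 0.01)}

def read_registers() -> dict:
    """Stand-in for a fieldbus read; returns raw 16-bit register values."""
    return {0: 235, 1: 152}  # e.g. 23.5 degrees C, 1.52 bar

def to_payload(raw: dict) -> str:
    """Translate raw register values into a normalized JSON payload."""
    decoded = {name: raw[addr] * scale
               for addr, (name, scale) in REGISTER_MAP.items()}
    return json.dumps(decoded)

payload = to_payload(read_registers())
print(payload)  # the gateway would now push this upstream, e.g. over GSM
```

The point of the sketch is the decoupling: the device side speaks registers, the Internet side speaks a normalized payload, and neither needs to know about the other.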

Connection made easy

Depending on the application, there are different ways to implement an IoT gateway in a given environment. At present, simple or embedded gateways are used. Both can receive data from any number of distributed stations and make it available on the IoT.

Simple gateways sort and packetize data for sending via the Internet. They also ensure that data is returned to its starting point if an application requires that to be done. The gateway performs the role of a bridge across which different kinds of data travel, being fed in via different communication interfaces and converted for transmission using different protocols.

Gateways can connect LANs with the Internet with a minimum of effort and expense, thereby making simple end-to-end solutions possible. In the Smart Home, for example, components that have no Internet access of their own can be incorporated in this way.

Embedded gateways with built-in intelligence

Embedded gateways provide similar functionalities. In addition to data transmission options, however, they can process local applications directly, which speeds up processes and makes real-time applications possible.

An embedded gateway can, for example, filter sensor data while at the same time performing sophisticated management tasks. Critical situations are recognized automatically. The system can then trigger an automatic alarm and send it over the network to trigger appropriate action by someone in a position of responsibility.
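A minimal sketch of this local filtering idea (the threshold value and message format are assumptions for illustration): the embedded gateway evaluates readings on site and only raises an alarm upstream when a critical condition is detected, instead of forwarding every value.

```python
# Embedded-gateway style local filtering: normal readings stay local,
# only critical ones produce an upstream alarm message.
# The threshold is an assumed example value, not from the article.

ALARM_THRESHOLD_C = 80.0  # assumed critical temperature

def process(readings: list[float]) -> list[str]:
    """Return alarm messages for critical readings; normal values would
    be aggregated locally rather than sent individually."""
    return [f"ALARM: {t:.1f} C exceeds {ALARM_THRESHOLD_C} C"
            for t in readings if t > ALARM_THRESHOLD_C]

print(process([21.3, 22.0, 85.4]))
```

Because the decision is made on the gateway itself, the alarm can be triggered in near real time even if the backhaul link is slow or temporarily unavailable.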

IoT technology is already in use in production environments and power networks and to monitor construction sites. Especially for devices installed at locations where conditions are rough or that are unmonitored or remote, the gateway establishes a reliable communication bridge between the terminal device in question and the company’s central server. These are situations in which the use of embedded IoT gateways is especially appropriate because they are robust and reliable and can deliver around the clock the very data that is required in the IoT environment.

Open architecture connects

A building block for swift realization of the Internet of Things is the hyperconnecting architecture on which the leading players in the IoT market rely. Based on open standards, it enables users to access the information they require anytime, anywhere. Hyperconnecting supports a large number of protocols, which is why, for example, sensor aggregation with several wireless protocols, including Wi-Fi, Bluetooth Low Energy (BLE), and ZigBee, can be realized.

Scalability is also simplified across different hardware platforms such as ARM and Intel architectures. In this kind of environment, unproblematic coexistence of C and Java development with open programming interfaces (APIs) and dynamic components is possible. Another feature is flexible messaging, which enables swift and seamless transmission of information.


Data protection on the Internet of Things: Interview with Dr. Volker Lüdemann

We need a data protection authority along the lines of the food inspection agency, says Dr. Volker Lüdemann, Professor of Commercial and Competition Law at the University of Osnabrück and since November 2014 chairman of the university’s Ethics Commission, in an interview with All about M2M. On the Internet of Things everything can communicate with everything; it connects the physical world with the world of information. In sharing sensor data smart devices chat incessantly and imperceptibly about users’ behavior.

Professor Lüdemann, what is Osnabrück University’s data protection expert currently working on?

A major concern of my research in this area is the connected car and autonomous driving. For Germany as a country with an automotive tradition the connected car is the supreme discipline, as it were. Alongside the smart home it is set to become the second-largest application area with a market volume of up to €200 billion in the years to come.

An impressive forecast, and that is precisely why many people are wondering about data protection. Do we really run a risk of becoming transparent citizens?

If we let things carry on as they are, we certainly do. People are nowhere near adequately aware of the need for data protection. Digitization and connectivity are fundamentally changing our communication situation. On the Internet of Things we may not necessarily be a part of the communication, but machines are automatically sharing information about users, constantly and, for the moment, imperceptibly. In a modern midrange car, for example, around 80 control devices with sensors are connected as part of its safety systems. The car leaves a data trail. In itself this data is not especially informative, but its potential lies in connecting and evaluating it. Those who have access to it can put it to commercial use, be they Google, carmakers, or software manufacturers. We are charting the course for the future: our data is the oil of the 21st century.

A fine comparison, but why is data such an important raw material?

Precisely because it is raw. On the Internet of Things, data is generated in bulk and is practically free from manipulation. That is what is really new. Take electricity meters, for example. In the past, the meter reader called round once a year and made a note of the reading. Today a smart meter can deliver information in much greater detail. It transmits 31.5 million datasets in the course of a year. Evaluated, this data reveals a precise user profile because nobody can influence his data over such a long period, and if data of this kind is available from a whole lot of users and is crosslinked and joined by data about traffic flows and visits to shops or hospitals, I suddenly know how an entire city functions.
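The 31.5 million figure quoted above corresponds almost exactly to one reading per second, which a quick check confirms:

```python
# Quick sanity check of the figure in the interview: 31.5 million
# datasets per year works out to roughly one meter reading per second.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 (non-leap year)
readings_per_second = 31_500_000 / SECONDS_PER_YEAR
print(SECONDS_PER_YEAR, round(readings_per_second, 3))
```

A per-second sampling rate is what turns a once-a-year meter reading into the fine-grained behavioral profile described here.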

Both light and shadows, then. We are delighted with predictive car maintenance and reliable traffic congestion reports or with fitness trackers that measure our heart rate and warn us of the risk of a heart attack. Experience with the smartphone shows that what is practical, comfortable and profitable will prevail. What must business and the state now do?

Clearly we all want to use the Internet of Things and it has many advantages. The task the state now faces is to balance the advantages and disadvantages. There is little the individual can do, especially as data is collected for the most part entirely unnoticed. Now is the time for the state to step in. In Germany there is a fundamental right to informational self-determination. The state is duty bound to set up a legal and supervisory system to ensure that this right is upheld. My solution approach is that we need a powerful data protection authority that ensures a basic level of security in much the same way as the food inspection authorities do. Consumers lack the knowledge and, above all, the opportunity to do so themselves. In future, all connected things should be pre-set at a basic level from which the user can only deviate explicitly.

How might personal data be protected on a practical, day-to-day basis? Must I give my explicit consent in future to my data being transmitted and evaluated before making every journey in a connected car?

The automakers are very open-minded in this debate. For them, data protection has become a sales argument. Take company cars, for example. The car is registered as owned by the employer and is insured on the basis of the user’s driving behavior. In other words, the vehicle records data of all kinds about destinations, style of driving, and frequency of breaks. In the factory setting this data stays in a protected area, but if the employer changes the setting and, say, additional data is collected, a warning light comes on in the car and the driver has the option of deliberately adjusting his behavior. It is much the same as a warning about video surveillance.

eCall will be mandatory for all new cars in the EU from March 31, 2018. You call it a Trojan horse. Where is the problem with the automatic electronic emergency call?

In principle eCall is great, but a closer look at the legislation reveals that the project would never have been launched solely to optimize the rescue of accident victims. In principle there are two versions. The statutory system is totally unproblematic in terms of data protection law. It lies dormant until the airbags are activated. It then transmits the requisite dataset, the emergency services are notified, and the rescue chain is set in motion. The carmaker, however, can install a system of its own and deactivate the statutory system. This system is totally unregulated; it relays all data continuously and is, as an open Internet interface, the killer application for the automobile industry. Where is the automatic emergency call sent? Which rescue service is notified and which hospital does the data go to? Which recovery service comes to tow the damaged vehicle to which repair shop? These decisions might in future lie in the hands of those who receive the data and can evaluate it.

The agreed version of the EU’s Data Protection Regulation will be submitted to the Council of Ministers for approval on April 21 and is scheduled to come into force at the beginning of 2018. Which innovations or improvements will it bring in terms of data protection?

At first glance the regulation reads well, but the devil is in the detail of its implementation by EU member states. My impression so far is that the innovations are not much of an improvement and that there will be practically no perceptible changes for the general public. The regulation deals mainly with serious penalties for breaches of data protection and with collaboration between international data protection authorities. It fails to tackle the fundamental issues. Internationally, the preconditions vary widely. In the United States, for example, and in the UK’s legal tradition, data protection is not a constitutional right. Initially, everything is permitted there and restrictions may be imposed later. That is why agreements such as Safe Harbor and now the Privacy Shield are so hard to negotiate.

Autonomous driving. Where are we heading there? How am I to imagine the future of the automobile?

Autonomous driving is no longer a remote and distant prospect. The collision of a Google car with a bus a few days ago made that clear. The Google Car may not yet be better than a human driver but the test driver on board the Google car sized up the situation just as wrongly as the car and failed to intervene. And along with driving as such, allied business models will change fundamentally. In a few decades’ time owning a car of your own will no longer play the role that it does today. Thanks to autonomous driving the road user may have what might be called a mobility subscription. As in car sharing he may book an SUV or a convertible as required. The car will drive up at the specified time and will drive off again at the end of the journey. Vehicles may all have the same level of motorization and drive bumper to bumper. In view of traffic planning and environmental pollution in megacities that would be a conceivable scenario. The car as such would no longer be a status symbol and an expression of one’s individual personality. The interior would be much more important because you no longer have anything to do with the driving. The car will become a platform for infotainment and entertainment, so in the future there might be an Apple Car or a Samsung Car, etc. The carmakers are aware of this trend and are developing new business models. But if they are to become mobility service providers they will definitely need access to the driving data.

As chair of Osnabrück University’s Ethics Commission do you now see autonomous driving from a further angle?

The latest development in this field in the United States is that computers are now authorized to drive vehicles there. The precondition was that computers must demonstrably drive better than people. Ethical aspects are a major problem here. How is the car to be programmed in case of doubt? To protect its passengers or to protect the outside world? In the United States quantifiable legal ethics applies and ten human lives count for more than one. In Europe human life is the supreme value in itself. The discussion is ongoing but for Europe there is no solution as yet. Here too the legal framework must now be established.

Volker Lüdemann is Professor of Commercial and Competition Law at the University of Osnabrück, Scientific Director of the Niedersächsisches Datenschutzzentrum and since November 2014 has chaired the university’s Ethics Commission. A qualified lawyer, he was previously an authorized officer at Volkswagen Versicherungsdienst GmbH in Wolfsburg.


IoT Day: Long Live Simplicity

On April 9 the IoT Council is holding another IoT Day. There will be events all over the world at which the community will discuss the Internet of Things and its implications for the economy, society, and culture. When the IoT Council planned the day back in 2011, entry into the world of IoT was difficult. Fortunately, the hurdles are now lower. But there is still plenty of work ahead.

A few years ago, implementing an IoT solution was like solving a puzzle. You needed to find matching parts from different suppliers and assemble them into a working solution. What sounds simple required extensive know-how in software development, hardware design, network communication, and system integration. Today most vendors offer complete solutions from a single source, and installation is almost plug and play. Making things talk has never been easier for customers.

With the “Cloud of Things”, Deutsche Telekom, for example, offers a “Swiss army knife” for the Internet of Things. The platform helps companies like the German forklift truck manufacturer Hubtex to connect their machines to the cloud easily and ensures that they can manage those machines from anywhere at any time. The platform also solves another problem: it converts device-specific information and measurements into a cross-device format, thus providing a common understanding among the connected devices – both with one another and with the customer’s IT. A few years ago that was only possible within closed, proprietary ecosystems.

Entry at moderate cost

Contrary to what you might expect, entry isn’t too expensive. Fees charged for the Cloud of Things, for example, are based on the number of connected devices, and you can flexibly book extra resources as required at any given time. Furthermore, the prices of modules, sensors and actuators have fallen significantly. From this base you can launch a pilot project without major investment and grow it into an entirely connected enterprise.

While entry to the Internet of Things has become easier for customers, the situation for suppliers of complete solutions has become more complex. The range of components has practically exploded in recent years. In some cases this rapid development has filled important gaps, as with Narrow Band IoT (NB-IoT). This innovative cellular technology comes into play where conventional access networks are uneconomical or simply do not meet the requirements of the application. It is particularly suitable for applications that require low-power, low-cost devices with wide area coverage and deep indoor penetration. It enables vendors to meet the exact demands of their customers.

Challenges ahead

But challenges still lie ahead. From a technical viewpoint the lack of standardization and interoperability is still an unnecessary barrier within the Internet of Things. News headlines about hacked IoT devices, surveillance concerns and privacy fears are increasingly alarming the general public and discouraging those who would like to start implementing the IoT in their businesses. At the same time there is a lack of legal and regulatory frameworks.

Of course this is only a small part of development over the past few years. But since it’s IoT Day we are not only aiming to share our point of view but are also interested in your thoughts. What is much simpler now than it was a few years ago? Where do we need to reduce complexity? Share your thoughts in the comment section below.
