Computer Science | Miklas Njor

The Big Picture – Multimedia Ontologies and MPEG-7 (part 1 of 2)


Multimedia content consists of complex objects, with sounds, audio, subtitles, and moving, ever-changing images. © James Nash –

As more and more multimedia content is digitised and added to the Web or to digital libraries, the need grows for ontologies that connect the meaning and relationships of the pictured objects. But how are we to connect the dots when the dots are difficult to see, let alone interpret? This can, however, be overcome by ontology matching, a “bridging of the semantic gap by matching a multimedia ontology against a common-sense knowledge resource” (James, Todorov & Hudelot, 2005).
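The simplest form of such ontology matching is label comparison. The sketch below matches concept labels from a multimedia ontology against a common-sense knowledge resource using string similarity; the labels are hypothetical examples, and real matchers use far richer lexical and structural evidence:

```python
from difflib import SequenceMatcher

def match_concepts(media_labels, knowledge_labels, threshold=0.8):
    """Match multimedia-ontology labels against a common-sense
    knowledge resource by simple string similarity."""
    matches = []
    for m in media_labels:
        for k in knowledge_labels:
            score = SequenceMatcher(None, m.lower(), k.lower()).ratio()
            if score >= threshold:
                matches.append((m, k, round(score, 2)))
    return matches

# Hypothetical labels from a multimedia ontology and a common-sense resource
media = ["VideoSegment", "AudioTrack", "Person"]
commonsense = ["person", "audio track", "building"]
print(match_concepts(media, commonsense))
```

A real system would also exploit synonyms, hierarchy and context, but the principle is the same: aligning two vocabularies so a concept in one can be interpreted in the other.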

Multimedia content consists of complex objects: sounds, audio tracks, subtitles and moving, ever-changing images. Which language are the actors speaking, are they all speaking the same language, and are the subtitles in yet another tongue? Is what’s being said relevant or just background chatter?

MPEG-7 is the de facto container for ontologies, so mapping and conversion between ontologies and the various syntaxes is important. There are clearly many challenges in creating ontologies for multimedia, with the wealth of competing metadata formats and standards and the heterogeneity of ontologies being the largest hurdles to overcome. As the amount of multimedia grows, so does the need for a fix-all solution.

Creating yet another standard to bridge the semantic gap will not solve the problems of annotating multimedia content, since the content quickly becomes too complex and the tools to extract and reason over the ontologies vary in quality. The interoperability issues will persist, and so will the question of which levels of granularity and abstraction best describe the multimedia content. Automated low-level interpretation of what is going on in a media file has become easier for machines to annotate and decipher, but high-level descriptions remain a challenge. How do we define instances of relevance? Read on for an overview of multimedia ontologies that use the MPEG-7 standard.

What are Multimedia Ontologies?

 Russian Dolls

Ontologies and multimedia are like Russian dolls: seemingly identical containers holding various levels of metadata, mirroring each other. © James Lee –

Ontologies and multimedia are like Russian dolls stuck in a kaleidoscope: seemingly identical containers holding various levels of metadata, mirroring each other. When working with ontologies and multimedia, the object itself is not the only thing referenced; the content itself also needs description. The “aboutness” of a file is embedded in the metadata, but problems on several levels are likely to arise, since computers don’t see, they read, making automatic creation of ontologies difficult.

When it comes to semantic interpretation, the ontology is in the eye of the beholder. And the bridge crossing the semantic gap of multimedia has a lot of eyes staring at it, trying to fix the problems. Werner Bailer (Bailer, 2011) points out three main problems:

  • Integrating Different Standards
  • Lack of Formal Semantics
  • Deployment of Multimedia Metadata

Why MPEG-7 looks like a winner

MPEG-7 (also known as the Multimedia Content Description Interface) is an ISO/IEC standard, developed in 2002 by the Moving Picture Experts Group (MPEG) as a tool to deal not only with metadata but also with the description of structural and semantic content. MPEG-7 defines multimedia Descriptors (Ds), Description Schemes (DSs) and their relationships. The Description Schemes group the Descriptors (visual, texture, camera motion, audio, actors, places, semantics). For low-level descriptors the annotation is often done automatically, while for high-level descriptors the annotation is manual. The Description Definition Language (DDL) ties the knot and forms the core of the standard. The DDL is written using XML Schema (Troncy, Celma, Little, García & Tsinaraki, 2007).
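To make the D/DS structure concrete, here is a schematic MPEG-7-style description parsed with Python’s standard library. The element names are simplified for illustration; a real document follows the full MPEG-7 schema, with namespaces and typed Description Schemes:

```python
import xml.etree.ElementTree as ET

# A schematic MPEG-7-style description: a Description Scheme (Video)
# grouping descriptors such as the media locator and a title.
doc = """
<Mpeg7>
  <Description>
    <MultimediaContent>
      <Video>
        <MediaLocator><MediaUri>clip.mp4</MediaUri></MediaLocator>
        <CreationInformation>
          <Title>Interview, scene 3</Title>
        </CreationInformation>
      </Video>
    </MultimediaContent>
  </Description>
</Mpeg7>
"""

root = ET.fromstring(doc)
print(root.find(".//MediaUri").text)  # clip.mp4
print(root.find(".//Title").text)     # Interview, scene 3
```

The nesting mirrors the standard’s idea: Description Schemes provide the structure, and Descriptors carry the actual feature values inside them.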

What MPEG-7 solves

The MPEG-7 standard describes low-level features, e.g. texture, camera motion or audio/melody, while the description schemes are metadata structures for capturing and annotating audio-visual content in a more abstract way, with descriptors defined using the Description Definition Language (DDL). MPEG-7 is defined as an XML schema with 1,182 elements, 417 attributes and 377 complex types. Without any formal semantics, this can cause interoperability issues when extracting or entering data.

What MPEG-7 doesn’t solve

Unlike domain ontology objects, multimedia objects often feature juxtaposed items present at the same time, whose complex relationships have to be mapped. As such, missing semantic descriptions of concepts appearing in multimedia objects can result in ambiguous and inconsistent descriptions, one of the main hurdles for MPEG-7.

Interoperability issues


The reason MPEG-7 is often cited as the best standard for multimedia is its level of granularity and its levels of abstraction. As Hiranmay Ghosh (Ghosh, 2010) points out: “The goal of a multimedia ontology is to semantically integrate distributed heterogeneous media collections (bridging the gap) and integrate multiple media types.” But there is a downside too. The MPEG-7 schema defines more than a thousand elements and several hundred attributes and complex types. Without any formal semantics, this can cause interoperability issues when extracting or entering data. The interoperability issues and complexity can, according to Troncy, Celma, Little, García & Tsinaraki, also be experienced as a burden.

Why the need for interoperability

There exist many types of metadata and metadata standards, data types and applications which process the file formats. As mentioned above, Werner Bailer points out three main problems, which we look at a little more closely below.

Integrating different standards

A multimedia file’s life cycle can be very complex, with many people dealing with the file at various stages from production to finished product and use. No single standard covers all work scenarios. Structural descriptions and low-level audiovisual features in MPEG-7 work well for some standards but might not fit others, and the concepts of objects are written in RDF/OWL, for which some standards have no, or only limited, reasoning tools.

Lack of Formal Semantics

Semantic elements are far from always properly defined, and there are many alternative ways to model the same descriptions. This makes them difficult to validate and to be understood by all software. A way to resolve ambiguities could be to use a limited set of description tools.

Deployment of Multimedia Metadata

Many metadata formats exist, and the metadata pertaining to a file is often published alongside the multimedia piece itself. This makes it difficult to process automatically, and the results can therefore be unreliable. Bailer concludes that “semantic technologies are not optimal for all types of data and there are limitations w.r.t. scalability”.

Requirements for a Multimedia Ontology

 Battle of Ontologies

May the best ontology win. © Yutaka Seki –

When Arndt et al. set out to create their multimedia ontology (COMM) (Arndt et al., 2007), the authors defined six requirements for designing a multimedia ontology:

  • MPEG-7 Compliance: as this is the standard used worldwide by the broadcasting community.
  • Semantic Interoperability: descriptions must be sufficiently explicit, ensuring that the intended meaning can be shared amongst different systems.
  • Syntactic Interoperability: An agreed-upon syntax e.g. OWL, RDF/XML or RDFa.
  • Separation of Concerns: Clear separation of administrative and descriptive labeling.
  • Modularity: Minimise the execution overhead.
  • Extensibility: The underlying model and assumptions should always be stable and ensure that new concepts can be added to the ontology without clashing with older models.

Suárez-Figueroa, Ghislain & Corcho review the best-known and most-used ontologies in the multimedia domain from 2001 to 2013, based on freely available RDF(S) or OWL, and present a framework: FRAMECOMMON (Suárez-Figueroa, Ghislain & Corcho, 2013). The authors highlight three criteria to look out for when developing or deciding on a multimedia ontology, namely:

  • Which multimedia dimensions (audio-visual, image, video, etc.) are covered by the ontology.
  • Documentation and code quality, and how easy is the ontology to pick up and use.
  • Is the ontology trustworthy and free of irregularities.

Dasiopoulou et al. likewise highlight conceptual clarity and well-defined semantic models, and point out that much of the metadata and semantics from multimedia products remains tucked away from the Semantic Web, due to scalability problems of representation and the capturing of contextual information (Dasiopoulou et al., 2010).

A list of Multimedia Ontologies




The Big Picture – A list of Multimedia Ontologies for MPEG-7 (part 2 of 2)

A list of Multimedia Ontologies for MPEG-7

This is a continuation of the post about what multimedia ontologies are, and the requirements one should try to apply.

On the basis of the requirements for multimedia ontologies, here follows a list of ontologies for describing MPEG-7 multimedia and multimedia content.

M3O – Multimedia Metadata Ontology

M3O sets out to solve the problem of metadata models that are too narrow and tied to specific media types, and so cannot be combined to describe multimedia presentations. Unlike existing metadata models, M3O is not locked into particular media types and allows the features of the different models and standards we find today to be integrated (Dasiopoulou et al., 2010).

M3O is based on DOLCE+DnS Ultralight and uses a generic modelling framework to represent multimedia metadata by adopting existing metadata models and standards. M3O is based on five Ontology Design Patterns (ODPs) representing data structures:

  • Identification of resource.
  • Separation of information objects and realisations.
  • Annotation of information objects and realisations.
  • Decomposition of information objects and realisations.
  • Representation of provenance information.

M3O is aligned with COMM (see below), MRO (see below) and EXIF, and the ontology is targeted at multimedia presentations on the Web. M3O has medium-quality documentation and high code clarity. Its drawbacks are missing options for annotations (Suárez-Figueroa, Ghislain & Corcho, 2013).

Harmony MPEG-7 based ontology

Translations of MPEG-7 definitions follow the original MPEG-7 schema, where content and segments are modelled as classes. Entities can have more than one semantic interpretation. However, this leads to ambiguities when interpretations travel to other parts of the ontology (Dasiopoulou et al., 2010).

MRO – Media Resource Ontology

Developed by the W3C Media Annotation Working Group, the MRO defines a set of minimal annotation properties for describing multimedia content together with a set of mappings between the 23 main metadata formats in use on the Web (IPTC, MPEG-7, XMP, Dublin Core, EXIF 2.2, DIG35, Media RSS, TV-Anytime and YouTube API Protocol, among others).

MRO maps multimedia metadata to ontology elements describing identification, content description, relational, copyright, distribution, parts, and technical properties. It has strong interoperability among many metadata formats, along with ontology properties describing media resources. MRO is used for annotation and analysis and has high-quality documentation and high code clarity. Its drawback is missing options to create annotations (Suárez-Figueroa, Ghislain & Corcho, 2013).
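The idea behind MRO’s cross-format mappings can be sketched as a lookup table from format-specific fields to shared `ma:` properties. The entries below are illustrative, not the normative W3C mapping table; the field and property names are my approximations of EXIF, Dublin Core and the Media Annotations vocabulary:

```python
# Illustrative mapping from (format, field) pairs to Ontology for Media
# Resources properties. Real MRO mappings cover dozens of formats and
# hundreds of fields; these four rows are hypothetical examples.
TO_MA = {
    ("dc", "creator"):            "ma:creator",
    ("dc", "title"):              "ma:title",
    ("exif", "DateTimeOriginal"): "ma:creationDate",
    ("youtube", "media:title"):   "ma:title",
}

def map_field(fmt, field):
    """Return the shared MRO property for a source field, or None if unmapped."""
    return TO_MA.get((fmt, field))

print(map_field("exif", "DateTimeOriginal"))  # ma:creationDate
print(map_field("dc", "title"))               # ma:title
```

Because two different source fields can land on the same target property, applications can query one vocabulary while the metadata stays in its native format.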

COMM – Core Ontology for Multimedia

The intention of COMM is to ease multimedia annotation and to satisfy the formal properties (defined by the creators of COMM) of a high-quality multimedia ontology: MPEG-7 compliance, semantic interoperability, syntactic interoperability, separation of concerns, modularity and extensibility. COMM is used for annotation, has a modular design that facilitates extension with other ontologies, and is based on DOLCE and implemented in OWL DL. It uses design patterns for contextualisation, called Descriptions and Situations (DnS), and for information objects, called Ontology of Information Objects (OIO) (Suárez-Figueroa, Ghislain & Corcho, 2013; Dasiopoulou et al., 2010).

COMM has high-quality documentation and code clarity. Its main drawbacks are the missing options to create disjoints or to set domain or range properties (Suárez-Figueroa, Ghislain & Corcho, 2013). COMM covers the structural, localisation and media description schemes, as well as low-level descriptors of the visual part, with room for information about the algorithms and parameters used to extract descriptions (Dasiopoulou et al., 2010).

  • OWL DL ontology.
  • Designed manually.
  • Based on the foundational ontology DOLCE.
  • Viewed using Protégé and validated using FaCT++ v1.1.5.
  • Upper-level ontology providing a domain-independent vocabulary that explicitly includes formal definitions of foundational categories.
  • Eases linkage of domain-specific ontologies because of the definition of top level concepts.
  • Covers the most important parts of MPEG-7 used for describing structure and content.

Hunter’s MPEG-7 Ontology

  • Extended and harmonised using the ABC upper ontology for applications in the digital libraries and eResearch fields.
  • OWL Full ontology containing classes defining media types and decompositions from the MPEG-7 Multimedia Description Schemes.
  • Can be viewed in Protégé and validated using the WonderWeb OWL Validator. Used for describing decomposition of images and their visual descriptors.
  • For use in larger semantic frameworks.
  • The ability to query abstract concepts is a result of being harmonised with upper ontologies such as ABC.

The following ontologies (MPEG-7 upper MDS, MPEG-7 Tsinaraki, MSO and VDO, and MPEG-7 Rhizomik) are the result of transforming the MPEG-7 standard into ontology languages based on a monolithic design.

MPEG-7 upper MDS

The aim of the MPEG-7 upper MDS ontology is reuse by other parties for exchanging multimedia content through MPEG-7, using the upper part of the Multimedia Description Scheme (MDS) of the MPEG-7 standard. It is used for annotation and analysis and uses OWL Full. The MPEG-7 upper MDS has low-quality documentation and low code clarity. Its main drawback is missing options to assert inverse relationships (Suárez-Figueroa, Ghislain & Corcho, 2013).

MPEG-7 Tsinaraki

Built using OWL DL, MPEG-7 Tsinaraki spans the MPEG-7 MDS, the classification schemes and parts of the MPEG-7 Visual and Audio parts. It is used for annotation, retrieval and filtering in digital libraries and has low-quality documentation and medium code clarity. Its main drawback is that it uses different naming criteria (Suárez-Figueroa, Ghislain & Corcho, 2013).

  • Written in OWL DL and captures the semantics of the MPEG-7 MDS (Multimedia Description Schemes) and the Classification Schemes.
  • Visualised with GraphOnto or Protégé. Validated and classified with the WonderWeb OWL Validator.
  • Integrated with OWL domain ontologies for football and Formula 1.
  • Used in many applications, including audiovisual digital libraries and e-learning. The XML Schema simple data-types defined in MPEG-7 are stored in a separate XML Schema to be imported in the DS-MIRF ontology.
  • XML elements are generally kept in the rdf:IDs of the corresponding OWL entities, except when two different XML Schema constructs have the same names.
  • The mapping ontology also captures the semantics of the XML Schemas that cannot be mapped to OWL constructs making it easy to return to the original MPEG-7 description from the RDF metadata.
  • The original XML Schema is converted into a main OWL DL ontology, while an OWL DL mapping ontology keeps track of the constructs mapped, allowing for conversions later on.

MSO – Multimedia Structure Ontology

The aim of MSO is to support audiovisual content analysis and object/event recognition, to create knowledge beyond object and scene recognition through reasoning, and to enable user-friendly and intelligent search and retrieval. MSO covers MPEG-7 MDS and combines high level domain concepts and low level multimedia descriptions, enabling new content analysis.

The purpose of many tools using MSO is to automatically analyse content, create new metadata and support intelligent content search and information retrieval. MSO has medium-quality documentation and high code clarity. Its reliability pitfalls lie in the difficulty of merging concepts in the same class, and in missing options for creating disjoints (Suárez-Figueroa, Ghislain & Corcho, 2013).

MSO largely follows the Harmony ontology. However, in order to map explicitly the multiple interpretations that attributes in MPEG-7 come with, for instance to differentiate between frames and keyframes (which helps in prioritising what to search for), MSO introduces new classes and properties not present in the Harmony ontology. MSO, unlike Harmony, modularises structural and low-level descriptions, splitting the definitions into two ontologies, which makes it easier to model domain-specific ontologies by linking them together (Dasiopoulou et al., 2010).

VDO – Visual Descriptor Ontology

Although labelled a visual ontology and not specifically a multimedia one, VDO (available in RDF(S) and aligned to DOLCE) uses the MPEG-7 standard for automatic semantic analysis of multimedia content, similar to MSO. VDO has high-quality documentation and medium code clarity, with some reliability pitfalls in merging concepts in the same class and no options for creating annotations (Suárez-Figueroa, Ghislain & Corcho, 2013).

MPEG-7 Rhizomik

The MPEG-7 Rhizomik ontology – in contrast to MSO/VDO/Harmony – assists in automatically translating the MPEG-7 standard to OWL via XSD2OWL and XML-to-RDF mappings, and covers the complete MPEG-7 standard. Although generally good for automation, it is regarded as challenging to connect to domain ontologies and to deal with its naming conflicts.

Easy linkage to domain ontologies would make MPEG-7 Rhizomik dovetail well with Semantic DSs; however, this is difficult due to the ontologies’ opposing naming criteria. MPEG-7 Rhizomik’s strict conceptualisation model requires remapping of existing definitions to merge with the MPEG-7 model (Dasiopoulou et al., 2010).

MPEG-7 Rhizomik has low-quality documentation and low code clarity. Its main drawbacks are missing domain or range restrictions on properties, the use of different naming criteria, the same URI being used for different ontology elements, and difficulty in merging concepts in the same class (Suárez-Figueroa, Ghislain & Corcho, 2013).

  • Maps XML Schema constructs to OWL constructs following a generic XML Schema to OWL mapping, together with an XML to RDF conversion.
  • Covers the whole standard and the Classification Schemes and TV Anytime. Visualised with Protégé or Swoop and validated/classified using the WonderWeb OWL Validator and Pellet.
  • Corresponding elements are defined as containers of both complex types and simple types. XML Schemas are automatically mapped to OWL ontologies via ReDeFer.
  • Used with other large XML Schemas in the Digital Rights Management domain like MPEG-21, ODRL and the E-Business domain.
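The gist of an XSD2OWL-style translation can be sketched in a few lines: complex types become OWL classes and elements become properties. This is an illustrative simplification, not ReDeFer’s actual rules or output, and the names used are hypothetical:

```python
# Much-simplified sketch of an XSD2OWL-style mapping: xsd:complexType
# declarations become owl:Class triples, and xsd:element declarations
# become datatype or object properties depending on whether they carry
# a simple datatype.
def xsd_to_owl(complex_types, elements):
    triples = []
    for ct in complex_types:
        triples.append((f"mpeg7:{ct}", "rdf:type", "owl:Class"))
    for name, datatype in elements.items():
        kind = "owl:DatatypeProperty" if datatype else "owl:ObjectProperty"
        triples.append((f"mpeg7:{name}", "rdf:type", kind))
    return triples

triples = xsd_to_owl(
    ["VideoSegmentType"],
    {"MediaUri": "xsd:anyURI", "Segment": None},  # hypothetical schema fragment
)
for t in triples:
    print(" ".join(t))
```

The real mapping also has to handle nesting, cardinalities, inheritance and name clashes, which is exactly where the naming-criteria problems noted above arise.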


There have been many more attempts by the academic world to crack the multimedia-ontology nut, creating more or less heterogeneous solutions. In this article we have given a quick overview of the majority of ontologies based on the MPEG-7 standard, listing their main features, technical properties and drawbacks.

There exist many types of metadata and metadata standards, but MPEG-7 is the most widespread multimedia standard, although it too is constrained by its use of XML and the interoperability problems this presents when mapping syntactic data and semantics, i.e. the lack of a standardised correspondence between XML Schema definitions and RDF Semantic Web languages.

We find that many ontologies have been built to bridge the semantic gap, but no single ontology fits all scenarios or formats. Semantic elements are far from always properly defined and many alternative ways to model the same descriptions exist, making it difficult to validate or make ontologies understandable by all software. A way to solve ambiguities could be to use a limited set of description tools.

Ontologies aiming for full interoperability will have to provide full coverage of the MPEG-7 features, leading to flexible structures, whereas ontologies for reasoning will have to enforce a more rigorous structure, which can become inflexible. It is also worth noting that if metadata is expected to carry semantics, this can lead to verbose and large files, which in turn can make the information redundant.

Nonetheless, the COMM ontology highlights the significance of formally founded standardised description models and shows promising results by using a modular multimedia ontology based on an upper ontology (DOLCE), making it extensible and easy to integrate with domain ontologies.








Security and Privacy in the Internet of Things

Abstract: The aim of this review report is to gain a broad understanding of privacy and security in IoT and the problems and open issues concerning this area.


The Internet of Things (IoT) mainly uses Wireless Sensor Networks (WSN) or Radio Frequency IDentification (RFID) to communicate with and connect to the outside physical world. IoT, WSN and RFID technologies are regarded by many researchers as insecure and still partly in the development stage. The key challenges for making IoT more widespread are adding better security between the layers of IoT devices and when communicating with the outside world.

The security aspect will help in dealing with the privacy aspect, which is equally important, since users have to be able to trust that the data an IoT device collects is not leaked to unauthorised parties. IoT is built upon the idea of the Internet; however, IoT is more challenging to secure than the Internet, since IoT devices have limited resources.


A mind map of the central idea where thinking about mobile isn’t just “thinking about mobile devices” but also technologies, ideas and approaches. What difference does mobile make to user experience? How do we deal with interfaces which aren’t any longer about screens? What are the privacy implications of crowd sensing?
© Mike

Literature Review

We searched for literature using Malmö University’s Summon and Google Scholar. The search terms used were “IoT”, “Internet of Things”, “privacy”, “security”, “survey” and “state of the art”, either as single terms or in combination. We accessed and read the abstracts of some hundred papers and downloaded about 30, of which we find seven relevant to our aim of getting an overview of the domain of security and privacy in IoT and where it is heading. Thus our focus for the chosen papers is on surveys, reviews and state of the art.


Here we present and discuss the papers we find relevant to privacy and security in IoT.

Internet of Things Architecture and Security

A discussion and review of the current research on the security requirements of IoT, based on the four layers of IoT technology (the Perceptual, Network, Support and Application Layers), is presented by Suo, Wan, Zou, & Liu [1]. The authors highlight security in IoT as more challenging than security on the Internet, since it is difficult to verify whether devices have been breached, and argue that the research community should pay more attention to the confidentiality, integrity and authenticity of data.

An IoT application consists of four layers: the Perceptual, the Network, the Support and the Application Layer.

Below we describe each layer, their security features and security requirements using definitions by Suo, Wan, Zou, & Liu [1].

The Perceptual Layer

  • Description: Gathers data from the equipment it is attached to (RFID readers, GPS sensors, etc.). The data can be, for example, a device’s geo-position or the surrounding temperature.
  • Security Features: Access to storage and power is limited, so it is difficult to set up protection or to monitor for security breaches.
  • Security Requirements: To deal with authentication, the authors highlight cryptographic algorithms and protocols with a small footprint.
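As a sketch of what small-footprint authentication can look like, the snippet below signs a sensor reading with an HMAC over a pre-shared key. This is a generic textbook construction, not a scheme proposed in [1], and the key and message format are hypothetical:

```python
import hmac
import hashlib

# Hypothetical pre-shared secret, provisioned on the device at manufacture.
SHARED_KEY = b"device-42-secret"

def sign_reading(reading: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over a sensor reading."""
    return hmac.new(SHARED_KEY, reading, hashlib.sha256).digest()

def verify_reading(reading: bytes, tag: bytes) -> bool:
    """Check integrity and origin of a reading; constant-time comparison."""
    return hmac.compare_digest(sign_reading(reading), tag)

msg = b"temp=21.5;ts=1700000000"
tag = sign_reading(msg)
print(verify_reading(msg, tag))           # True
print(verify_reading(b"temp=99.9", tag))  # False
```

The backend can verify that a reading came from the device and was not altered, at the cost of one hash computation per message, which even constrained hardware can usually afford.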

The Network Layer


Panel: The Internet of Things Revolution – Functional, Usable, Wearable (AppsWorld London Notes) Notes from the AppsWorld Europe 2013 panel “The Internet of Things Revolution – Functional, Usable, Wearable” with Tamara Roukaerts, Saverio Romeo, Paul Lee, Ben Moir and Mike Barlow.
© Mike Barlow

The Support Layer

  • Description: The Support Layer deals with data processing and decision-making based on the collected information. The layer also unites the Network Layer and the Application Layer.
  • Security Features: Difficulties lie in actually knowing whether the data being processed is valid input or a virus.
  • Security Requirements: Anti-virus protection, plus encryption algorithms and protocols with a small footprint.

The Application Layer


Activate the world (or: what “mobile” really means)

  • Description: The Application Layer is the outermost layer, facing the users of the IoT device or service, and will often feature some kind of user interface.
  • Security Features: Controlling who has access to the device’s data and to which parts of it, and with whom the device is allowed to share the data.
  • Security Requirements: Access authentication to protect user privacy, and educating users about password management.
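A minimal sketch of such application-layer access control follows, with hypothetical roles, fields and values chosen only for illustration:

```python
# Which caller may read which parts of a device's data (hypothetical roles).
PERMISSIONS = {
    "owner":       {"temperature", "location", "battery"},
    "heating_app": {"temperature"},
}

def read_field(caller, field, data):
    """Return a field of the device's data only if the caller is allowed."""
    if field not in PERMISSIONS.get(caller, set()):
        raise PermissionError(f"{caller} may not read {field}")
    return data[field]

data = {"temperature": 21.5, "location": "55.6N,12.9E", "battery": 80}
print(read_field("heating_app", "temperature", data))  # 21.5
```

The point is the partitioning: a third-party application sees only the slice of data it needs, while the owner retains full access.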

Using two case studies, of smart homes and medical implants, Kermani, Zhang, Raghunathan, & Jha [2] methodically highlight the problematic areas of embedded systems and how they can be exploited, and further describe possible solutions and workarounds for better hardware and software security in IoT devices.

IoT challenges and opportunities



A good historical background of the Internet of Things and a definition of “thing” are given by Agrawal & Das [3], where the authors explain the underlying technologies (WSN and RFID) and pick apart the security and privacy concerns and problems of these technologies, as well as the interoperability issues of trust and of heterogeneous sources communicating. The authors list many challenges and opportunities for the Internet of Things. We acknowledge that these elements are highly connected; however, we choose to highlight and comment only on the challenges and opportunities of security and privacy in IoT.

Security and privacy challenges


The challenges regarding security and privacy highlighted by [3] are:

  • Standards: Mass IoT rollout requires standardisation of many elements.
  • Privacy: Protecting the privacy of users and their devices.
  • Identification and Authentication: Privacy control via authentication.
  • Security: Device communication and inter-communication must be secure.
  • Trust and Ownership: User-trust in collected data.

Security and privacy opportunities

The opportunities regarding security and privacy highlighted by [3] are:

  • Insecure and not Secure: Security software vendors will have an entirely new area to safeguard, however IoT security is complex to manage.
  • Reachability: IPv6 addresses on every element will make every device reachable, provided standards are in place to secure interoperability.
  • Efficiency: Tied to Reachability above, where devices sense and communicate with their surroundings, to help with logistics, tracking and management of data.

Internet of Things and standardisation

Carrying four RFIDs


The security perspective of IoT from a standardisation point of view is argued by Keoh, Kumar & Tschofenig [4], who methodically map the problems facing IoT security to how they can be – and in many ways already are – solved by standardisation. They highlight the efforts of the Internet Engineering Task Force to standardise security within the IoT. Although slightly biased towards their own achievements, they thoroughly examine, evaluate and analyse many problems and levels of security. They also conclude by adding perspectives on Moore’s law and the problem of many new devices’ high power consumption.

Internet of Things contrasted to Internet


An analysis of the security aspects of each layer of IoT objects, the cross-layer issues of heterogeneous integration and the security aspects of IoT is given by Jing, Vasilakos, Wan, Lu & Qiu [5], contrasting these issues with how they are dealt with on the Internet. The authors thoroughly detail the pros and cons of each layer’s security problems with clear references, contrasting their findings with the Internet, namely:

  • IoT is composed mostly of RFID and WSN nodes with limited resources, whereas the Internet is made up of computers, servers and smart devices with many resources.
  • The Internet uses advanced algorithms, security measures and heavy computation; in IoT, power is scarce, so we have to rely on lightweight solutions.
  • Communication in IoT goes through slower and less secure wireless bands, which can result in information being leaked to third parties.
  • PCs and other devices connected to the Internet have operating systems with underlying security, whereas IoT devices only have some code to run the device.

Internet of Things and Privacy

IoT and the concept of Connected


The aim of the note by Mashhadi, Kawsar & Acer [6] is to start a discussion within the HDI (Human Data Interaction) and IoT communities to better understand and reflect on the issue of who owns the data created and produced in the IoT environment, and to find relevant models that allow users to give permission for, and keep control over, when and how they share information. The authors do not critically reflect on who owns the data, but indirectly take the stance that data produced by users is owned by the users, without directly backing up this position with arguments or references. It is simply assumed, even though the title of the paper is “Human Data Interaction in IoT: The Ownership Aspect”.

They do, however, argue that IoT devices collect data from and about people. The authors argue the pros and cons, through many examples, of using secure multi-party computation (SMC) for enforcing and protecting users’ privacy in the IoT domain. They conclude that the main obstacle is immature technology, but do not touch on another important aspect, namely that IoT devices do not necessarily have the computational power to carry out the computations. The authors provide a model to solve the problems they define, and discuss possible side effects of their solutions, including illustrating overlapping application domains vs. data sensitivity.
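The flavour of SMC can be illustrated with toy additive secret sharing, a generic textbook building block rather than the specific protocol discussed in [6]. Each device splits its private reading into random shares; the aggregators learn only the sum, never any individual value:

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is modulo P

def share(value, n):
    """Split a value into n random shares that sum to it modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

readings = [21, 23, 19]                 # three devices' private sensor values
shares = [share(r, 3) for r in readings]
# aggregator j receives the j-th share from every device and sums them
partials = [sum(s[j] for s in shares) % P for j in range(3)]
total = sum(partials) % P
print(total)  # 63
```

No single aggregator can reconstruct a device’s reading from its share alone, which is the point of the approach; the cost is extra communication and computation, echoing the resource concern raised above.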

Internet of Things and the Future Internet of Things

A pile of RFID Rings

Khan, Khan, Zaheer, & Khan [7] take a forward-looking view of privacy and security in IoT and the Future IoT (FIoT), contrasting it with the current state of the art. The authors summarise and categorise several key challenges for IoT and point to government bodies currently working to solve these problems.

The authors also point out not only interoperability issues but also the findability of devices: IoT devices need to be aware not only of their surroundings but also of surrounding devices, which they may need to communicate with to accomplish tasks or to collect data from. However, it is difficult to deploy awareness measures and authentication logic in these rudimentary IoT devices to allow such socialising.


In this paper we have briefly looked at the security and privacy issues facing the Internet of Things. We have described the four layers of IoT devices and mapped their security challenges. We find that IoT is still at a development stage, with security challenges that need to be ironed out before the vision of truly smart devices and mass adoption of the technologies can succeed. Security and privacy are hampered by devices with too little power to deal with the complex tasks of encryption and authentication.

It seems that most research bases its ideas on the Internet and the World Wide Web, where in fact, as many point out, the Internet of Things domain is more complex, since IoT devices are highly autonomous units with little power for authentication or encryption. We have touched on another need for security, namely privacy of the collected data, so that unauthorised third parties cannot gain access to a device and scrape its data for unauthorised use. This is, however, also a challenge for IoT, since devices are meant to communicate with the outside world and with each other. The question remains open as to who should control communication, and how.


  • [1] Suo, H., Wan, J., Zou, C., & Liu, J. “Security in the internet of things: a review”, Computer Science and Electronics Engineering (ICCSEE), 2012 International Conference on. Vol. 3. , 2012. IEEE
  • [2] Kermani, M. M., Zhang, M., Raghunathan, A., & Jha, N. K. “Emerging Frontiers in embedded security”, 2013 26th International Conference on VLSI Design and 2013 12th International Conference on Embedded Systems (VLSID), 2013. IEEE
  • [3] Agrawal, S., & Das, M. L. “Internet of Things – A paradigm shift of future Internet applications”, Engineering (NUiCONE), 2011 Nirma University International Conference on, 2011. IEEE
  • [4] Keoh, S., Kumar, S. & Tschofenig, H. “Securing the Internet of Things: A Standardization Perspective”, 2014.
  • [5] Jing, Q., Vasilakos, A. V., Wan, J., Lu, J., & Qiu, D. “Security of the Internet of Things: Perspectives and challenges”, 2014.
  • [6] Mashhadi, A., Kawsar, F., & Acer, U. G. “Human Data Interaction in IoT: The ownership aspect”, Internet of Things (WF-IoT), 2014 IEEE World Forum on, 2014. IEEE
  • [7] Khan, R., Khan, S. U., Zaheer, R., & Khan, S. “Future Internet: the internet of things architecture, possible applications and key challenges”, Proceedings of the 2012 10th International Conference on Frontiers of Information Technology, 2012. IEEE Computer Society


Alan Turing © Charis Tsevis

Artificial Intelligence – a very short introduction

Definitions of Artificial Intelligence

Below we highlight four definitions of Artificial Intelligence (AI).

  • “Artificial Intelligence is a discipline devoted to the simulations of human cognitive capabilities on the computer” (Rajaram, 1990).
  • “Artificial Intelligence is a new science of researching theories, methods and technologies in simulating or developing thinking process of human beings” (Ling-fang, 2010).
  • “Artificial Intelligence is an attempt to understand the substance of intelligence, and produce a new intelligent machine that could make reactions similar to the human intelligence” (Ning and Yan, 2010).
  • “The capability of a device to perform functions that are normally associated with human intelligence, such as reasoning and manipulating factual and heuristic knowledge” (Hosea, Harikrishnan and Rajkumar, 2011).

The field of Artificial Intelligence (AI) connects with other science fields such as information theory, cybernetics, automation, bionics, biology, psychology, mathematical logic, linguistics, medicine and philosophy (Ning and Yan, 2010).

Hosea, Harikrishnan and Rajkumar (2011) argue that a machine is truly AI if it solves certain classes of problems requiring intelligence in humans, or survives in an intellectually demanding environment. Following this, one could divide the definition into two parts: the epistemological part, that is, the real-world representation of facts, and the heuristic part, where the facts help solve the problem through rules. The authors identify four requirements a device must meet in order to be said to have artificial intelligence, and highlight the advantages and disadvantages of Artificial Intelligence.

  • Requirements: Human Emotion; Create data associations to make decisions; Self-consciousness; and Creativity and Imagination.
  • Advantages of AI: No need for pauses or sleep; Rational or pre-programmed emotions could make for better decision-making; Easy to make multiple copies.
  • Disadvantages of AI: Limited sensory input compared to humans; Humans can deteriorate but still function, devices and applications quickly grind to a halt when minor faults set in.

AI is generally seen as an intelligent aid. Humans regard themselves as always making rational, optimal choices. In that light, intelligent computers will always try to find the correct medical diagnosis or try to win at a game. However, reality is more blurred. Humans can have hidden motives for losing a game, perhaps to let a child build confidence, or may prescribe different medicine based on the patient’s attitude (Waltz, 2006).

Paradigms in Artificial Intelligence

Marvin Minsky

AI revolves more around engineering and has no fixed theories or paradigms. Having said that, the two main paradigms to receive traction are Bernard J. Baars’ Global Workspace Theory from his 1988 book “A Cognitive Theory of Consciousness” (Baars, 2005), and the agent-based model independently invented and championed by R. A. Brooks (Brooks, 1990) and by Marvin Minsky in his book “The Society of Mind” from 1986 (Brunette, Flemmer and Flemmer, 2009).

Baars: His Global Workspace Theory uses a theatre metaphor of a spotlight shining on one area (the stage), while a lot is going on behind the scenes. Humans can focus on and complete a task while many other things are going on at the same time.

Minsky: Believes that consciousness is made up of many smaller parts or agents, which collectively work together to produce intelligence.

Brooks: Builds cognition using a layered approach, where each layer can act upon or suppress input from the layers below it.
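Brooks’ layered idea can be sketched in a few lines of Python. This is a toy illustration, not Brooks’ actual subsumption implementation; all behaviour names and percept keys are invented.

```python
# Each layer proposes an action or defers (returns None).
# Higher-priority layers suppress everything below them.

def wander(percept):
    # Lowest layer: always proposes moving forward.
    return "move-forward"

def avoid_obstacle(percept):
    # Middle layer: suppresses wandering when an obstacle is sensed.
    return "turn-left" if percept.get("obstacle") else None

def seek_charger(percept):
    # Highest layer: takes over when the battery runs low.
    return "head-to-charger" if percept.get("battery_low") else None

# Layers ordered from highest priority to lowest.
LAYERS = [seek_charger, avoid_obstacle, wander]

def act(percept):
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:
            return action  # this layer suppresses the layers below it

print(act({}))                                       # move-forward
print(act({"obstacle": True}))                       # turn-left
print(act({"obstacle": True, "battery_low": True}))  # head-to-charger
```

The key design point is that there is no central planner: intelligence emerges from simple layers arbitrating over shared sensor input.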

History of Artificial Intelligence

C. E. Shannon

The year 1956 and Dartmouth College are regarded as the birthdate and birthplace of AI, since this was the first time the phrase Artificial Intelligence was used. Many of the attendees (John McCarthy, Marvin Minsky, Claude Shannon, Nathaniel Rochester, Arthur Samuel, Allen Newell, and Herbert Simon) became leaders within the field of AI and went on to open departments at MIT, Stanford, Edinburgh, and Carnegie Mellon University (Brunette, Flemmer and Flemmer, 2009).

However, Alan Turing’s Turing Test from 1950 already captured the idea of programming a digital computer to behave intelligently, and Strachey’s checkers program from 1952 is another early example of an intelligent computer (Hosea, Harikrishnan and Rajkumar, 2011); so too are Vannevar Bush’s “Memex” concept from 1945 and “The Turk” from the eighteenth century (Buchanan, 2005).

Timeline of Artificial Intelligence

Professor John McCarthy

1950 – 1969: The 1950s and 1960s saw a rise in methodologies and applications for problem-solving, pattern recognition and natural language processing. The programming language LISP was invented in 1960 by John McCarthy (Brunette, Flemmer and Flemmer, 2009). However, these applications had trouble scaling to larger problems (Singh and Gupta, 2009). In 1969 the International Joint Conference on Artificial Intelligence (IJCAI) was formed.

1970 – 1989: The 1970s and early 1980s saw the rise of expert systems such as MYCIN, but also a dawning awareness that AI was a lot more complicated than first thought. The programming language PROLOG was added to the AI stack, making it possible to use logic to reason about a knowledge base. The late 1980s saw the introduction of intelligent agents that react to their environment (Brunette, Flemmer and Flemmer, 2009).

Robotic hand holding a lightbulb.

Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing.

1990 – 1999: In the 1990s, intelligent agents, robotics and embodied intelligence found their way into R&D projects, thanks to improvements in computing power, sensors and the underlying theory. Applications began to focus on helping businesses and organisations. The late 1990s saw intelligent agents being connected to one another, leading to the idea of Distributed Artificial Intelligence via the web.

2000 – present: A main focus is adding consciousness, human-like behaviour and emotions to machines (Brunette, Flemmer and Flemmer, 2009). Another area of focus is machine learning, data mining, algorithms and collective intelligence, due to the amount of unstructured data available on the web (and in databases) and the need to make sense of it (Singh and Gupta, 2009). AI also plays a major role in the social sciences and Social Network Analysis (Ling-fang, 2010).

The future of Artificial Intelligence: Waltz (2006) predicts that the future of AI over the next 20 years will be determined by the interaction of three factors: financial (funding), technical (useful applications) and scientific (intelligent progress), with a main focus on “cognitive prosthesis” and semantic applications, i.e. converging towards a more industrial outlook of helping humans complete tasks they dislike or do poorly. Research into the underlying theory will diminish. Funding will come from private companies like Google, Yahoo and Microsoft in collaboration with academia; NASA, the National Science Foundation (NSF) and other government bodies will not be willing to continue funding AI research. Waltz identifies five areas that will thrive,

as well as two other fields: AI theory and algorithms, and Turing Test AI, which Waltz regards as wildcard areas, since they cannot realistically be expected to produce practical results.

Concepts in Artificial Intelligence

Mosaic portrait of Alan Turing using the mathematical analysis used to decode the Enigma machines during World War II.

Expert Systems (Expert AI): Expert systems rely on an inference engine and a knowledge base. The engine is often rule based (Rajaram, 1990). Expert systems are used to assist in decision-making. Usage examples: blood infection diagnostics and credit authorisation (Ling-fang, 2010).
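A toy version of the rule-based inference engine described here might look as follows. This is a minimal forward-chaining sketch; the facts and rules are invented placeholders, not real diagnostic knowledge.

```python
# Knowledge base: if all conditions hold, the conclusion is added as a fact.
rules = [
    ({"fever", "infection_marker"}, "blood_infection"),
    ({"blood_infection"}, "prescribe_antibiotics"),
]

def infer(facts):
    """Repeatedly fire rules whose conditions hold until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"fever", "infection_marker"})
# derived now also contains 'blood_infection' and 'prescribe_antibiotics'
```

Real expert systems add certainty factors, explanations and far larger rule bases, but the engine-plus-knowledge-base split is the same.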

Symbolic Mathematical Systems: Computer programs problem-solve using symbols instead of numbers (Rajaram, 1990).

Intelligent Communication Systems: Allows for communication between humans and machines (Rajaram, 1990).

Signal Based Systems: Signal based communication refers to input (vision and speech recognition) and output (visualisation and speech generation) (Rajaram, 1990).

Example of a Natural Language Processing application.

Symbol Based Systems and Natural Language Processing: Symbol-based communication refers to understanding natural language, i.e. semantics, or reasoning about what is meant in a sentence (Rajaram, 1990). Currently this is an area that gets a lot of attention, due to the amount of data available on social media and on the web (Ling-fang, 2010).

Machine Learning: Machine-learning reasons about data by studying examples and using problem-solving and decision-making skills, rather than following a set of rules (Rajaram, 1990).
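The contrast with rule-following can be sketched with a one-nearest-neighbour classifier: no rules are written down, the program generalises directly from labelled examples. The data points are invented for illustration.

```python
import math

# Labelled training examples: 2-D feature points with class labels.
examples = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"),
    ((9.0, 8.5), "large"),
]

def classify(point):
    """Label a point by its single nearest labelled example."""
    return min(examples, key=lambda ex: math.dist(point, ex[0]))[1]

print(classify((1.1, 0.9)))  # small
print(classify((8.5, 9.2)))  # large
```

Adding more examples changes the decisions without touching the code, which is precisely the difference from a hand-written rule set.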

Logic-Based Learning Systems: Here the computer uses logic to reason about the input, i.e. if this and this and this is true, then that is true also (Rajaram, 1990).

Biological Analog Learning Systems: Computers built to resemble the biological system of the human body and brain (Rajaram, 1990).

Robotics: The goal is to create machines that can perform tasks for humans, not only in an industrial-age way of continuous automation, but by intelligently analysing each step and taking action depending on the task at hand (Ling-fang, 2010).

The Asimo Robot

A robot is a mechanical or virtual artificial agent, usually an electro-mechanical machine that is guided by a computer program or electronic circuitry. Robots can be autonomous, semi-autonomous or remotely controlled and range from humanoids such as ASIMO and TOPIO to nano robots, ‘swarm’ robots, and industrial robots. By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.


  • Baars, B. J. (2005) “Global workspace theory of consciousness: toward a cognitive neuroscience of human experience?”, Progress in Brain Research, Vol. 150, pp. 45 – 52.
  • Brooks, R. A. (1990) “Elephants Don’t Play Chess”, Robotics and Autonomous Systems, Vol. 6, pp. 3 – 15.
  • Brunette, E. S., Flemmer, R. C. and Flemmer, C. L. (2009) “A Review of Artificial Intelligence”, Proceedings of the 4th International Conference on Autonomous Robots and Agents, Wellington, New Zealand, pp. 385 – 392.
  • Buchanan, B. G. (2005) “A (Very) Brief History of Artificial Intelligence”, American Association for Artificial Intelligence – 25th anniversary issue, pp. 53 – 60.
  • Hosea, S., Harikrishnan, V. H. and Rajkumar, K. (2011) “Artificial Intelligence”, 3rd International Conference on Electronics Computer Technology, Vol. 1, pp. 124 – 129.
  • Ling-fang, H. (2010) “Artificial Intelligence”, 2nd International Conference on Computer and Automation Engineering (ICCAE), Vol. 4, pp. 575 – 578.
  • Ning, S. and Yan, M. (2010) “Discussion on Research and Development of Artificial Intelligence”, IEEE International Conference on Advanced Management Science (ICAMS 2010), Vol. 1, pp. 110 – 112.
  • Rajaram, N. S. (1990) “Artificial Intelligence: A Technological Review”, ISA Transactions, Vol. 29 (1), pp. 1 – 3.
  • Singh, V. K. and Gupta, A. K. (2009) “From Artificial to Collective Intelligence: Perspectives and Implications”, 5th International Symposium on Applied Computational Intelligence and Informatics, Timisoara, Romania, pp. 545 – 549.
  • Waltz, D. A. (2006) “Evolution, Sociobiology, and the Future of Artificial Intelligence”, IEEE Intelligent Systems, pp. 66 – 69.

A brief look at Open-Source Software and its History

Let it grow © Paco Espinoza

Corn in a field

What is the history of Open-Source Software (OSS), which types exist, and why? Which business models can software vendors apply when entering the OSS market? Open-Source Software is a critical component of the Internet and of IT systems. It has turned copyright laws and development strategies upside down, and it poses a threat to proprietary software business models.

Open-source software has evolved from being a practical and slightly rebellious undertaking fighting proprietary software and lock-in to being a business-model in itself.

The code has never been proven to be better than proprietary code, but it has matured tremendously. Up until now, copyright laws have worked as a means to prevent exploitation of the rights-holder’s intellectual property, and as a side effect have forced competing businesses to innovate. But with OSS, the shared IP itself is what drives innovation.

A short timeline of Open-Source Software

When the seed for open-source software was planted some 35 years ago, it was off to a slow start, mostly due to misconceptions and the specialised nature of the software. Software had been in that situation since the fifties, when specialised software shipped with the computer itself and was maintained by programmers, who shared the improvements they made with fellow programmers.

By the mid-1970s, as operating systems became more sophisticated, software companies began to close off their source code in order to get a return on their development investments (Dale, 2010, p. 563; Gonzalez-Barahona, 2000).

Richard Stallman – creator of the GNU Project and the GNU General Public License (GPL). © Preliminares 2013

By the early 1980s, former MIT programmer Richard Stallman had launched the GNU Project and a legal tool, the GNU General Public License (GPL), to promote the creation of more open-source software. In the late 1980s the Computer Science Research Group (CSRG) of the University of California at Berkeley released a variant of the UNIX operating system called BSD UNIX. Although distributed as open-source software, it still required an AT&T license for several crucial parts. The early 1990s saw Linus Torvalds releasing Linux and Bill Jolitz completing a truly unencumbered BSD UNIX called 386BSD (Gonzalez-Barahona, 2000).

A real eye-opener for open-source software was the Mozilla project in 1998 and the release of the Netscape browser suite source code, proving what a large community can do when it is allowed to view and build upon the code (Mozilla); a bold move motivated by the release of Eric Raymond’s analysis “The Cathedral and the Bazaar” the year before (Wikipedia, 2011).

Why does Open-Source Software exist

As pointed out above, there was a feeling within the developer community that software should be free to change and use, but there are also practical reasons. When a manufacturer of any sort develops a product, it is impossible to meet all of its customers’ expectations. Some companies use this fact as a way to chain users to their products, but others see it as a chance to encourage user input in the manufacturing process (Hippel, 2001).

Open-source software (OSS) is very often developed in a public, collaborative manner.

What defines Open-Source Software

In the broadest terms, open-source software is defined as software a user can obtain a copy of at no cost. With this copy the user is free to use, study and modify the source code and to re-distribute the software, including modifications, as long as the new source code is licensed under the same or a similar license, for instance the General Public License (GPL), also known as “copyleft” (Hippel, 2001, p. 84).

This also means that code incorporated from other open-source projects is free to use without any legal implications or infringement of other developers’ copyright. Open-source software can be an entire operating system (Linux) or a specific application (OpenOffice).

Does Open-Source Software exist outside IT

Free Beer © The Art Gallery of Knoxville

The idea of sharing and building upon others’ knowledge is as old as time itself. Mankind evolved by learning from and observing its surroundings. Through trial and error, skills were learnt and tools created, then passed on to the next generation, who in turn evolved these ideas even further before passing them on. The spirit of sharing knowledge exists today in a variety of forms and can be observed, for instance, among surfers (Hippel, 2001).

The Open Source Software Code

Error finding, maintenance and quality of code

There is no evidence that the development process of OSS leads to better or less faulty code (Dale, 2010, p. 564), nor that it is faster to develop than proprietary code (Madey et al., 2002, p. 1807). But former CEO of Open Source Development Labs, Stuart Cohen, regards OSS as “generally great code, not requiring much support” (Cohen, 2008).

Why do people make Open-Source Software

An often-cited motive for developing open-source software is “an itch that needs scratching”. Several studies have been made to find out why people work for free for the benefit of others. Gaining a reputation and a sense of creativity and obligation are intrinsic factors; the only extrinsic factor is getting paid (Lakhani et al., 2003).

What are the benefits of Open-Source Software

Katherine Noyes of PCWorld quotes a Gartner survey among 547 IT leaders in 11 countries, finding that “competitive advantages and lower total cost of ownership were two of the primary drivers” (Noyes, 2011). “Flexibility, increased innovation, shorter development times and faster procurement processes” were also among the main points, as well as an expectation that the OSS percentage of organisations’ overall portfolio will “reach 30 percent within the next 18 months”, up from approximately 10 percent only five years ago.

Who is the developer

Participation in open-source projects is a self-organised collaboration between developers who might never meet in real life, often with only pride and satisfaction in return (Madey et al., 2002, p. 1807).

What should business be aware of before developing Open-Source Software

A software license is a legal instrument.

At EclipseCon 2005, Tim O’Reilly listed several points to take into consideration. Here are but a few:

    • Design for Participation – modular software, easy to integrate into a larger system, and use an OSS license.
    • User-Centered Development – Release early and often, systemise bug fixing, promote active users.
    • Don’t Differentiate on Features – Don’t build exclusive and proprietary features.
    • Follow industry standards – let users decide configuration and support emerging services.
    • The Perpetual Beta – Add new features regularly. Test on users and collect the data.
    • Leverage Commodity Economics – Use the LAMP stack to be able to scale quickly.
    • Users Add value – Involve users and let them add their own data.

(O’Reilly, 2005, pp. 10 – 13, 15)

Advantages of Open Source Software

FOSS – Free Open Source Software

Apart from there being no price tag, there is little administrative hassle and no lost CDs. The software can be used continuously and indiscriminately. It is easily downloaded, which encourages the use of speciality software or one-time use. It is easy to modify for special needs or to plug onto existing OSS. There is some debate about whether OSS or proprietary software offers better support, safety and bug fixes (Lamb, 2006).

Disadvantages of Open-source software

OSS is often hailed for its huge pool of knowledge-sharing websites that offer help, but there is no formal support, and users may be charged a fee when they want support or help with modifications. Once interest in a piece of software is lost, there are often no more bug fixes, upgrades or support. There is also a risk of not being able to exchange files with proprietary software, which can lead to workflow problems (Lamb, 2006).

Open-Source Software and Licenses

The basis of open-source software is licensing. Without signing over the rights to freely use, modify or distribute the code, the spirit of OSS is lost. About 50 different types of licenses exist, but they can be summed up like this:

    • “provide credit”: use, modification, redistribution are allowed, but credit to the original author is due, if redistributed. Examples: BSD license, Apache License v2.
    • “provide fixes”: use, modification, redistribution are allowed, but source code for any changes must be provided to the original author, if redistributed. Examples: Mozilla-style licenses (Mozilla Public License).
    • “provide all”: use, modification, redistribution are allowed, but source code of any derived product must be provided, if redistributed. Example: GPL.

(Daffara, 2009)

As OSS projects grow from small undertakings into larger, more commercial-scale efforts, the Mozilla Public License is thought to be significant, as it relates to corporate thinking (Fitzgerald, 2006).

Business Models for Open-Source Software

Richard Stallman at the USAC, answering questions after his talk “Copyright vs Community”. © tian2992

Proprietary software business models rely on customer “lock-in”, which often leads to high purchasing, acquisition and operational costs for the software. It is often regarded as a “safe buy”. That is good for suppliers, but not for customers. Although OSS is often viewed as free of charge, this is a misconception, since companies may still have to deal with operational costs along with possible adjustments to the source code.

And this is exactly where many open-source software companies make their money: by offering support and customisation of their products. Not only does the customer get software for free, they also get the assurance of having help, or even someone to shout at and blame, when it all goes south (Williams, 2007).

A problem for many OSS companies is that they only have one product, making it difficult to grow a business (Vance, 2009).

What is most profitable? Open-Source Software or Proprietary Software?

It is difficult to predict whether an application or platform is more likely to be profitable as open-source or closed-source. Studies suggest the importance of evaluating the market as a whole, since there are numerous variables to take into account.

A large variety of proprietary applications for an open-source platform can lead to profits above those of a proprietary platform industry; yet users craving a large variety of applications drives proprietary profit, which in turn is offset by the fact that open-source platforms generally have a wider selection of applications (Economides, 2006, p. 1057).

Examples of strategic Open-Source Software

Linus Torvalds, creator of the operating system Linux © angelcalzado

Another possible reason for entering the OSS market is strategic. In 2006 Microsoft teamed up with Novell, which specialises in Linux, an open-source operating system that competes with Microsoft, to form a partnership in which Novell provides interoperability between Linux and Microsoft products (Cohen, 2008).

Google and IBM have also used strategic relations. Google’s financial support of the Mozilla Foundation paints Microsoft’s Internet Explorer into a corner, and IBM has for many years backed Linux, helping it become a competitor to Microsoft, while developing proprietary software of its own to be used on Linux (Vance, 2009).

The future of Open-Source Software

Stuart Cohen suggests that OSS has reached such a high level of quality that businesses relying on revenue from support will find it hard to meet the expectations of their investors, and that perhaps companies should collaborate, since they often have the same itch that needs scratching (Cohen, 2008).

The “Future of Open Source Survey – 2011” from North Bridge Venture Partners (450 respondents) suggests that OSS is “fully embraced by organisations in both the public and private sectors”, and pinpoints SaaS, cloud and mobile computing as growth drivers in OSS markets. Also noteworthy is that, for the first time, freedom from vendor lock-in surpassed lower cost as a point of attraction (North Bridge, 2011).


Open-source software has evolved from being a practical and slightly rebellious undertaking fighting proprietary software and lock-in to being a business-model in itself.

The code has never been proven to be better than proprietary code, but there seems to be a consensus that it has matured tremendously, and this could in fact have an impact on the businesses that rely on selling support.

Up until now, copyright laws have worked as a means to prevent exploitation of the rights-holder’s intellectual property, and as a side effect have forced competing businesses to innovate. But with OSS, including others in the IP without limit is what drives innovation.

Companies do not only see OSS as a business opportunity, but also a strategic means to battle with the competition.

The future for businesses wanting to enter the OSS market looks bright, provided they research their respective markets to position themselves.