Miklas Njor
Epoch time that is way off

Correcting Epoch time strangeness

I was accessing data from an API and performed an exploratory data analysis (EDA). One irregularity I noticed was that the epoch time was off by roughly forty-six thousand years! The epoch value probably came to be so large because the decimal “dot” had somehow been removed.

A hack workaround was to convert the epoch time to a string, chop off the last three digits, insert a decimal dot, and glue the string back together as a float. This way I was able to get the correct value. As far as I can see, this approach is safe for values up to the year 9999, possibly more.
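A minimal sketch of that workaround (the function name and sample value are just for illustration):

```python
def fix_epoch(raw):
    """Re-insert the decimal point dropped from an epoch timestamp.

    Assumes the last three digits were the fractional (millisecond) part,
    e.g. 1430000000123 -> 1430000000.123.
    """
    s = str(raw)
    return float(s[:-3] + "." + s[-3:])

print(fix_epoch(1430000000123))  # 1430000000.123
```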

Mapping key/value in semi-structured data in Python


Wordcloud Discussion

I had to merge several wordclouds into a larger wordcloud based on all keywords and counts, and I only had access to the text files containing each keyword and its count, not a list with all the keywords.

In order to be able to do a re-count and to build the new list, I needed to create a list containing the exact count of each keyword.

Luckily the structure was the same for all pairs, and since there was only one token (keyword) in each pair, it could be done in one loop.

An alternative way to solve this problem could be to build a “dict” of tokens and add up the count each time a matching token was found.

You can see the code below.
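A minimal sketch of the dict-based approach, using collections.Counter (assuming each line in the text files holds one keyword followed by its count, separated by whitespace; the file names in the usage example are made up):

```python
from collections import Counter
from pathlib import Path

def merge_keyword_counts(files):
    """Merge per-wordcloud keyword counts into one combined count."""
    totals = Counter()
    for path in files:
        for line in Path(path).read_text(encoding="utf-8").splitlines():
            if not line.strip():
                continue  # skip empty lines
            keyword, count = line.rsplit(maxsplit=1)
            totals[keyword] += int(count)
    return totals

# Example usage:
# merged = merge_keyword_counts(["cloud1.txt", "cloud2.txt"])
# print(merged.most_common(10))
```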

 





How to add a collapsible comment form to your WordPress blog

Works in: Firefox, Safari, Chrome.

Possible conflicts:
A problem with WP Ajaxify Comments, which makes the page jump to the top. It also updates the first post at the top, which might only be a problem if you use the Auto Load Next Post plugin.

Set up

Abstract: Users expect to be able to comment on your blog posts; however, very few users actually choose to do so. The comment box takes up a lot of space, but removing it is out of the question, because you want people to be able to comment. This creates a gap between expected usability and functionality on one hand, and your website’s design on the other. Comment boxes take up a lot of screen real estate, since users need to fill in at least three fields: their name, email address and the comment itself. On top of that, there is the submit button, perhaps checkboxes for subscribing to the blog or to new comments, and perhaps even buttons for logging in via social accounts. So the comment area needs to be there just in case, but still not be in the way. This is good information architecture…

Jetpack has a nice workaround to this problem, where the box expands and buttons come into view once the user clicks inside the comment field. This is great usability, but I couldn’t get this feature to work as expected on my blog. No matter what I did (CHMOD’ing files, changing .htaccess, deactivating plugins), I would get an error once a comment was submitted that said: “You don’t have permission to access /wp-comments-post.php”.

A second problem was that I use the Auto Load Next Post plugin to create an infinite scroll of related posts. If the comment box takes up too much space, it is difficult for the user to figure out that new posts appear under the post they are reading.

A possible workaround could be to print a message to the screen saying that new posts have been loaded below, but constantly showing a message would eventually drive users nuts. You would then have to write a script that keeps track of how many times this modal had appeared in front of the user, set a cut-off point, and perhaps store it in a cookie for when users come back to the site. Yada-yada. The list goes on…

So I decided to just add a simple JavaScript toggle button to open and close the collapsible element, letting users who wish to comment on my blog do so, while the 99 percent who don’t comment aren’t bothered by the large comment box.

Tools you need:

For editing files you need Notepad if you are on Windows, or TextEdit if you are on a Mac. If you are on Linux, you hopefully know what to do, right (because I don’t!). Some other decent IDEs (fancy name for code editors) are TextWrangler, TextMate or Komodo Edit. You also need a program to transfer the files to and from your server. FileZilla comes to mind, or Transmit.

Files you need:

  • comments.php (in your theme’s folder – use your file transfer program to fetch it)
  • functions.php (in your theme’s folder – use your file transfer program to fetch it)
  • header.php (in your theme’s folder – use your file transfer program to fetch it)
  • style.css (in your theme’s folder – use your file transfer program to fetch it)
  • js.js (this file you will create and upload to your theme folder via the file transfer program)

A note about functions.php
When you update your theme, it overwrites (deletes) every file and, with it, any customisation you have made. This is slightly problematic… If you (perhaps correctly) feel that you shouldn’t edit your theme’s functions.php file, you can either create a child theme based on your current theme and add the functions.php, style.css and js.js to the child theme folder, or you can create what is known as a Must Use plugin. It sounds scarier than it is. There are many places to read about how to create both, see references below. Another way around this is to create a file called “myNotes.html” or similar, add it to your top folder (don’t store passwords in it) and use it to keep notes about things you have added to your site, or install the Note Press plugin to keep notes in the admin area. This is also useful if several people run the site.

Ok, enough talk already, let’s get to it.

Creating the collapsible javascript comment form for WordPress

Step 1 (comment_form())

First off, you need to make sure that your theme uses comment_form(). You can check this in the comments.php file in your theme’s folder. The statement will look something like this:
<?php comment_form(); ?>.
If you don’t see comment_form() anywhere in your comments.php file, check other files with “comments” in their name, or try to email the theme developer. As a last resort, you might want to change themes, as this is an indication of a theme that doesn’t follow current standards, and as such might have other problems.
The reason you need to make sure that your theme uses comment_form(), is that we are going to write some code that modifies it, so if it’s not there, we can’t modify it.

Step 2 (create javascript file)

Create a file called “js.js” (.js stands for JavaScript). You can name your JavaScript file anything, but make sure that if you name it “jeronimo”, the file has “.js” appended to it as the file extension, so browsers know that it is a JavaScript file.

Step 3 (write javascript)

In your js.js file (or whatever you called it), add the following code:
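The snippet is the plain-JavaScript show/hide toggle from CSS Tricks (linked in the references below); the element id you pass in must match whatever wrapper you put around the comment form ("commentform-wrap" here is just an example):

```javascript
// Show/hide toggle (plain JavaScript, no jQuery needed).
// Pass in the id of the element that wraps the comment form,
// e.g. toggle_visibility('commentform-wrap').
function toggle_visibility(id) {
  var e = document.getElementById(id);
  if (e.style.display === 'block') {
    e.style.display = 'none';
  } else {
    e.style.display = 'block';
  }
}
```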

Save the file and upload it to the theme folder.

This shouldn’t clash with the other functionality of your site, or with the jQuery features which many WordPress sites use. I had some trouble getting jQuery to play nice while creating this collapsible comment box, so the functionality is written in plain JavaScript. The code is from CSS Tricks.

Step 4 (add link to javascript file)

Open up your header.php and link to your JavaScript file. If you are unsure how to link to it from the header, here is the way I do it:
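Something along these lines (assuming the file is called js.js and sits in your theme folder; get_template_directory_uri() returns the URL of the active theme):

```php
<script type="text/javascript"
        src="<?php echo esc_url( get_template_directory_uri() . '/js.js' ); ?>"></script>
```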

This goes in header.php, inside the <head> section (just above the closing </head> tag), and make sure it is not placed inside any <?php … ?> tags.

Save and upload the file to your theme folder.

Step 5 (functions.php)

Open your functions.php from your theme’s folder. If you don’t have a file called functions.php (very unlikely), create a new file, name it functions.php, and add an opening <?php tag at the top and a closing ?> tag at the bottom.

Just above the closing ?> at the bottom of the file (functions.php), you add the following code. I have added comments denoted by // to explain what is going on.
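The exact snippet depends on your theme, but a minimal sketch of the idea looks like this: it hooks into the comment_form_before and comment_form_after actions that comment_form() fires, prints a toggle button, and wraps the form in a hidden container. The ids “comment-toggle” and “commentform-wrap” and the button text are just examples and must match what you use in js.js and style.css.

```php
// Print a toggle button and open a hidden wrapper just before the comment form.
function mn_collapsible_comment_open() {
    // onclick calls the toggle_visibility() function from js.js.
    echo '<button type="button" id="comment-toggle" onclick="toggle_visibility(\'commentform-wrap\')">Leave a comment</button>';
    echo '<div id="commentform-wrap" style="display: none;">';
}
add_action( 'comment_form_before', 'mn_collapsible_comment_open' );

// Close the wrapper again right after the comment form.
function mn_collapsible_comment_close() {
    echo '</div>';
}
add_action( 'comment_form_after', 'mn_collapsible_comment_close' );
```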

Save and upload the file.

Step 6 (CSS)

If you want to go all artsy fartsy, you can add the following to your style.css:
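For example, something like this for the example #comment-toggle button from the snippets above (tweak it to match your theme):

```css
/* Basic styling for the toggle button – adjust colours and sizes to taste. */
#comment-toggle {
    padding: 0.5em 1em;
    border: 1px solid #ccc;
    border-radius: 3px;
    background: #f5f5f5;
    cursor: pointer;
}
#comment-toggle:hover {
    background: #e0e0e0;
}
```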

That way people can see that the button has some sort of function…

References

The javascript code:
https://css-tricks.com/snippets/javascript/showhide-element/

If you want to add the button somewhere else, you have a few options, however the setup is slightly different and you will have to find information elsewhere:
http://codex.wordpress.org/Function_Reference/comment_form
http://justintadlock.com/archives/2010/07/21/using-the-wordpress-comment-form

How to create must use (MU) plugins:
From Justin Tadlock (a very good explanation):
http://justintadlock.com/archives/2011/02/02/creating-a-custom-functions-plugin-for-end-users
From the WordPress Codex: (just scrapes the surface):
http://codex.wordpress.org/Must_Use_Plugins

Child themes
http://code.tutsplus.com/tutorials/child-themes-basics-and-creating-child-themes-in-wordpress–wp-27475
http://codex.wordpress.org/Child_Themes

Jetpack related posts and qTranslate-X fix

Problem:
I had some trouble getting Jetpack’s Related Posts to play nice with qTranslate-X. The issue I had was that Jetpack’s Related Posts did not translate the posts’ headlines and link title tags, and since this site is multilingual and most of the articles are in both Danish and English, this is a big problem.

Obviously, with the many settings in the WordPress ecosystem, the fault could lie elsewhere; however, a lot of switching off plugins and settings, clearing caches and looking under the hood led me to believe that the fault is with Jetpack. The fix is pretty simple, except it will be overridden the next time I update the plugin, so I have notified the developers of Jetpack.

In line 3, in the code snippet below, the original code is

which means that the post’s title is returned to the system and stripped of all HTML tags (syntax). The title is, however, not internationalised, in the sense that the system doesn’t account for any translations of the post title string.

The solution
The solution is to add __() around the $post_title so the line becomes

The place to make changes is: wp-content/plugins/jetpack/modules/related-posts/jetpack-related-posts.php on line 697 (in version: 20150408 of the plugin)
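In general terms the change looks like this (illustrative only, not the plugin’s exact code on that line):

```php
// Before: the raw post title is returned with its HTML stripped,
// but without passing through any translation filters.
$post_title = wp_strip_all_tags( $post->post_title );

// After: wrapping it in __() runs it through the gettext filters,
// which lets qTranslate-X pick the right language.
$post_title = __( wp_strip_all_tags( $post->post_title ) );
```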


The Big Picture – Multimedia Ontologies and MPEG-7 (part 1 of 2)


Multimedia content are complex objects, with sounds, audio, subtitles, moving and ever changing images. © James Nash – https://flic.kr/p/8QqhpK

As more and more multimedia content is digitised and added to the Web or to Digital Libraries, the need for ontologies to connect the meaning and relationships of the pictured objects increases. But how are we to connect the dots, when the dots are difficult to see, let alone interpret? This can, however, be overcome by ontology matching, a “bridging of the semantic gap by matching a multimedia ontology against a common-sense knowledge resource” (James, Todorov & Hudelot, 2005).

Multimedia content consists of complex objects, with sound, subtitles, and moving, ever-changing images. Which language are the actors speaking, are they speaking the same language, and are the subtitles in yet another tongue? Is what’s being said relevant or just background chatter?

MPEG-7 is the de facto container for ontologies, and mapping and conversion between ontologies and various syntaxes is important. It is obvious that there are many challenges when it comes to creating ontologies for multimedia, with the wealth of competing metadata formats and standards, and heterogeneous ontologies, being the largest hurdles to overcome. As the amount of multimedia grows, so does the need for a fix-all solution.

Creating yet another standard to bridge the semantic gap will not solve the problems of annotating multimedia content, since the content quickly becomes too complex and the tools to extract and reason over the ontologies vary in quality. The interoperability issues will continue to exist, and so will the problem of which levels of granularity and abstraction are best for describing the multimedia content. Automated low-level interpretation of what’s going on in a media file has become easier for machines to annotate and decipher, but high-level descriptions still remain a challenge. How do we define instances of relevance? Read on for an overview of multimedia ontologies that use the MPEG-7 standard.

What are Multimedia Ontologies


Ontologies and multimedia are like russian dolls with seemingly identical containers containing various levels of metadata, mirroring each other. © James Lee – https://flic.kr/p/9tT9Yh

Ontologies and multimedia are like Russian dolls stuck in a kaleidoscope: seemingly identical containers containing various levels of metadata, mirroring each other. When working with ontologies and multimedia, the object itself is not the only thing referenced; the content itself also needs description. The “aboutness” of a file is embedded in the metadata; however, problems on several levels are likely to arise, since computers don’t see, they read, making automatic creation of ontologies difficult.

When it comes to semantic interpretation, the ontology is in the eye of the beholder. And the bridge crossing the semantic gap of multimedia has a lot of eyes staring at it, trying to fix the problems. Werner Bailer (Bailer, 2011) points out three main problems:

  • Integrating Different Standards
  • Lack of Formal Semantics
  • Deployment of Multimedia Metadata

Why MPEG-7 looks like a winner

MPEG-7 (also known as the Multimedia Content Description Interface) is an ISO/IEC standard, developed in 2002 by the Moving Picture Experts Group (MPEG) as a tool to deal with not only metadata but also the description of structural and semantic content. MPEG-7 defines multimedia Descriptors (Ds), Description Schemes (DSs) and their relationships. The Description Schemes group the Descriptors (visual, texture, camera motion, audio, actors, places, semantics). For low-level descriptors the annotation is often done automatically, and for high-level descriptors the annotation is manual. The Description Definition Language (DDL) ties the knot and forms the core part of the standard. DDL is written using XML Schema (Troncy, Celma, Little, García & Tsinaraki, 2007).

What MPEG-7 solves

The MPEG-7 standard describes low-level features, e.g. texture, camera motion or audio/melody, while the description schemes are metadata structures for capturing and annotating audio-visual content in a more abstract way, with descriptors using the Description Definition Language (DDL). MPEG-7 is defined as an XML schema which defines 1182 elements, 417 attributes and 377 complex types. Without any formal semantics, this can cause interoperability issues when extracting or entering data.

What MPEG-7 doesn’t solve

Unlike domain ontology objects, multimedia objects often feature juxtaposed items present at the same time, whose complex relationships are to be mapped. As such, missing semantic descriptions of concepts appearing in multimedia objects can result in ambiguous and inconsistent descriptions, one of the main hurdles for MPEG-7.

Interoperability issues

Complexity Science

In general usage, complexity tends to be used to characterize something with many parts in intricate arrangement. The study of these complex linkages is the main goal of complex systems theory.

The reason MPEG-7 is often cited as the best standard for multimedia is its level of granularity and levels of abstraction. As Hiranmay Ghosh (Ghosh, 2010) points out: “The goal of a multimedia ontology is to semantically integrate distributed heterogeneous media collections (bridging the gap) and integrate multiple media types.” But there is a downside too. The MPEG-7 schema defines more than a thousand elements, and half as many attributes and complex types. Without any formal semantics, this can cause interoperability issues when extracting or entering data. The interoperability issues and complexity can, according to Troncy, Celma, Little, García & Tsinaraki, also be experienced as a burden.

Why the need for interoperability

Many types of metadata and metadata standards exist, along with the data types and applications which process the file formats. As mentioned above, Werner Bailer points out three main problems, which we look a little closer at below.

Integrating different standards

A multimedia file’s life cycle can be very complex, with many people dealing with the file at the various stages from production to finished product and use. There is no ideal standard which covers all work scenarios. Structural descriptions and low-level audiovisual features in MPEG-7 work well for some standards but might not fit others, and the concepts of objects are written in RDF/OWL, for which some standards have no, or only limited, tools for reasoning.

Lack of Formal Semantics

Semantic elements are far from always properly defined, and there are many alternative ways to model the same descriptions. This makes them difficult to validate and to be understood by all software. A way to solve ambiguities could be to use a limited set of description tools.

Deployment of Multimedia Metadata

Many metadata formats exist, and the metadata pertaining to a file is often published alongside the multimedia piece itself. This makes it difficult to process automatically, and the results can therefore be unreliable. Bailer concludes that: “Semantic technologies are not optimal for all types of data and there are limitations w.r.t. Scalability”

Requirements for a Multimedia Ontology


May the best ontology win. © Yutaka Seki – https://flic.kr/p/thpARF

When Arndt et al. set out to create their multimedia ontology (COMM) (Arndt et al., 2007), the authors defined six requirements for designing a multimedia ontology:

  • MPEG-7 Compliance: as this is the standard used worldwide by the broadcasting community.
  • Semantic Interoperability: sufficiently explicitly described. Ensuring that the intended meaning can be shared amongst different systems.
  • Syntactic Interoperability: An agreed-upon syntax e.g. OWL, RDF/XML or RDFa.
  • Separation of Concerns: Clear separation of administrative and descriptive labeling.
  • Modularity: Minimise the execution overhead.
  • Extensibility: The underlying model and assumptions should always be stable and ensure that new concepts can be added to the ontology without clashing with older models.

Suárez-Figueroa, Ghislain & Corcho review the most well-known and used ontologies in the multimedia domain from 2001 to 2013, based on freely available RDF(S) or OWL, and present a framework: FRAMECOMMON (Suárez-Figueroa, Ghislain & Corcho, 2013). The authors highlight three criteria to look out for when developing or deciding on a multimedia ontology, namely:

  • Which multimedia dimensions (audio-visual, image, video etc) are covered by the ontology.
  • Documentation and code quality, and how easy is the ontology to pick up and use.
  • Is the ontology trustworthy and free of irregularities.

Dasiopoulou et al. also highlight conceptual clarity and well-defined semantic models, and point out that much of the metadata and semantics from multimedia products remains tucked away from the semantic web, due to scalability problems of representation and the capturing of contextual information (Dasiopoulou et al., 2010).

A list of Multimedia Ontologies

The list of Multimedia Ontologies continues here

References


The Big Picture – A list of Multimedia Ontologies for MPEG-7 (part 2 of 2)

A list of Multimedia Ontologies for MPEG-7

This is a continuation of the post about what multimedia ontologies are, and the requirements one should try to apply.

On the basis of the requirements for multimedia ontologies, here follows a list of ontologies for describing MPEG-7 multimedia and multimedia content.

M3O – Multimedia Metadata Ontology

M3O sets out to solve the problem of metadata models that are too narrow and tied to specific media types, and which cannot be used in conjunction to describe multimedia presentations. Unlike existing metadata models, the M3O is not locked into particular media types and allows for integrating the features of the different models and standards we find today (Dasiopoulou et al., 2010).

M3O is based on DOLCE+DnS Ultralight and uses a generic modelling framework to represent multimedia metadata, by adopting existing metadata models and metadata standards. The M3O is built around five Ontology Design Patterns (ODP) representing data structures:

  • Identification of resource.
  • Separation of information objects and realisations.
  • Annotation of information objects and realisations.
  • Decomposition of information objects and realisations.
  • Representation of provenance information.

M3O is aligned with COMM (see below), MRO (see below) and EXIF, and the ontology targets multimedia presentations on the web. M3O has medium-quality documentation and high code clarity. Its drawbacks are missing options for annotations (Suárez-Figueroa, Ghislain & Corcho, 2013).

Harmony MPEG-7 based ontology

Translations of MPEG-7 definitions follow the original MPEG-7 schema, where content and segments are modelled as classes. Entities can have more than one semantic interpretation. However, this leads to ambiguities when interpretations travel to other parts of the ontology (Dasiopoulou et al., 2010).

MRO – Media Resource Ontology

Developed by the W3C Media Annotation Working Group, the MRO defines a set of minimal annotation properties for describing multimedia content together with a set of mappings between the 23 main metadata formats in use on the Web (IPTC, MPEG-7, XMP, Dublin Core, EXIF 2.2, DIG35, Media RSS, TV-Anytime and YouTube API Protocol, among others).

MRO maps multimedia metadata to ontology elements describing: identification, content description, relational, copyright, distribution, parts, and technical properties. It has strong interoperability among many metadata formats, along with ontology properties describing media resources. MRO is used for annotation and analysis and has high quality documentation and high code clarity. MRO drawbacks are missing options to create annotations (Suárez-Figueroa, Ghislain & Corcho, 2013).

COMM – Core Ontology for Multimedia

The intention of COMM is to ease multimedia annotation and solve the formal properties (defined by the creators of COMM) of a high-quality multimedia ontology: MPEG-7 compliance, semantic interoperability, syntactic interoperability, separation of concerns, modularity and extensibility. COMM is used for annotation, has a modular design which facilitates extensibility with other ontologies, and the ontology is based on DOLCE and implemented in OWL DL. It uses design patterns for contextualisation called Descriptions and Situations (DnS) and information objects called Ontology for Information Object (OIO) (Suárez-Figueroa, Ghislain & Corcho, 2013). The four design patterns are:

(Dasiopoulou et al., 2010)

COMM has high quality of documentation and code clarity. Its main drawbacks are the missing options to create disjoints, or to set domain or range properties (Suárez-Figueroa, Ghislain & Corcho, 2013). COMM covers structural, localisation and media description schemes, as well as low-level descriptors of the visual part, with room for information about algorithms and parameters to extract descriptions (Dasiopoulou et al., 2010).

  • OWL DL ontology.
  • Designed manually.
  • Based on the foundational ontology DOLCE.
  • Viewed using Protege and validated using Fact++-v1.1.5.
  • Upper-level ontology providing a domain-independent vocabulary that explicitly includes formal definitions of foundational categories.
  • Eases linkage of domain-specific ontologies because of the definition of top level concepts.
  • Covers the most important parts of MPEG-7 used for describing structure and content.

Hunter’s MPEG-7 Ontology

  • Extended and harmonised using the ABC upper ontology for applications in the digital libraries and eResearch fields.
  • OWL Full ontology containing classes defining media types and decompositions from the MPEG-7 Multimedia Description Schemes.
  • Can be viewed in Protégé and validated using the WonderWeb OWL Validator. Used for describing decomposition of images and their visual descriptors.
  • For use in larger semantic frameworks.
  • The ability to query abstract concepts is a result of being harmonised with upper ontologies such as ABC.

The following ontologies (MPEG-7 upper MDS, MPEG-7 Tsinaraki, MSO and VDO, and MPEG-7 Rhizomik) are the results of transforming the MPEG-7 standard to ontology languages based on a monolithic design.

MPEG-7 upper MDS

The aim of the MPEG-7 upper MDS ontology is reuse by other parties for exchanging multimedia content through MPEG-7, by using the upper part of the Multimedia Description Schemes (MDS) of the MPEG-7 standard. It is used for annotation and analysis and uses OWL Full. The MPEG-7 upper MDS has low quality of documentation and low code clarity. Its main drawback is missing options to assert inverse relationships (Suárez-Figueroa, Ghislain & Corcho, 2013).

MPEG-7 Tsinaraki

Built using OWL DL, MPEG-7 Tsinaraki spans the MPEG-7 MDS and the classification schemes, and parts of the MPEG-7 Visual and Audio parts. It is used for annotation, retrieval and filtering for digital libraries and has low quality of documentation and medium code clarity. Its main drawback is that it uses different naming criteria (Suárez-Figueroa, Ghislain & Corcho, 2013).

  • Written in OWL DL and captures the semantics of the MPEG-7 MDS (Multimedia Description Schemes) and the Classification Schemes.
  • Visualised with GraphOnto or Protégé. Validated and classified with the WonderWeb OWL Validator.
  • Integrated with OWL domain ontologies for football and Formula 1.
  • Used in many applications, including audiovisual digital libraries and e-learning. The XML Schema simple data-types defined in MPEG-7 are stored in a separate XML Schema to be imported in the DS-MIRF ontology.
  • XML elements are generally kept in the rdf:IDs of the corresponding OWL entities, except when two different XML Schema constructs have the same names.
  • The mapping ontology also captures the semantics of the XML Schemas that cannot be mapped to OWL constructs making it easy to return to the original MPEG-7 description from the RDF metadata.
  • The original XML Schema is converted into a main OWL DL ontology, and an OWL DL mapping ontology keeps track of the mapped constructs, allowing for conversions later on.

MSO – Multimedia Structure Ontology

The aim of MSO is to support audiovisual content analysis and object/event recognition, to create knowledge beyond object and scene recognition through reasoning, and to enable user-friendly and intelligent search and retrieval. MSO covers MPEG-7 MDS and combines high level domain concepts and low level multimedia descriptions, enabling new content analysis.

The purpose of many tools using MSO is to automatically analyse content, create new metadata and support intelligent content search and information retrieval. MSO has medium-quality documentation and high code clarity. Its reliability pitfalls lie in the difficulty of merging concepts in the same class, and missing options for creating disjoints (Suárez-Figueroa, Ghislain & Corcho, 2013).

MSO largely follows the Harmony ontology. However, in order to explicitly map the multiple interpretations that the attributes in MPEG-7 come with, for instance for mapping and differentiating between frames and keyframes (which help in prioritising what to search for), MSO introduces new classes and properties not present in the Harmony ontology. MSO, unlike Harmony, modularises structural and low-level descriptions, splitting the definitions into two ontologies, making it easier to model domain-specific ontologies by linking them together (Dasiopoulou et al., 2010).

VDO – Visual Descriptor Ontology

Although labelled as a visual ontology and not specifically for multimedia, VDO (available in RDF(S) and aligned with DOLCE) uses the MPEG-7 standard for automatic semantic analysis of multimedia content, similar to MSO. VDO has high-quality documentation and medium code clarity, with some reliability pitfalls in merging concepts in the same class and no options for creating annotations (Suárez-Figueroa, Ghislain & Corcho, 2013).

MPEG-7 Rhizomik

The MPEG-7 Rhizomik ontology – in contrast to MSO/VDO/Harmony – assists in automatically translating the MPEG-7 standard to OWL via XSD2OWL and RDF2OWL mappings, and covers the complete MPEG-7 standard. Although generally good for automation, it is regarded as challenging to connect to domain ontologies and to deal with naming conflicts.

Easy linkage to domain ontologies would make MPEG-7 Rhizomik dovetail well with Semantic DSs; however, this is difficult due to the opposing naming criteria of the ontologies. MPEG-7 Rhizomik’s strict conceptualisation model requires remapping of existing definitions to merge with the MPEG-7 model (Dasiopoulou et al., 2010).

MPEG-7 Rhizomik has low quality of documentation and low code clarity. Its main drawbacks are missing domain or range restrictions on properties, the usage of different naming criteria, the same URI for different ontology elements, and difficulty in merging concepts in the same class (Suárez-Figueroa, Ghislain & Corcho, 2013).

  • Maps XML Schema constructs to OWL constructs following generic XML Schema to OWL together with an XML to RDF conversion.
  • Covers the whole standard and the Classification Schemes and TV Anytime. Visualised with Protégé or Swoop and validated/classified using the WonderWeb OWL Validator and Pellet.
  • Corresponding elements are defined both as containers of complex types and as simple types. Automatic mapping of XML Schemas to OWL ontologies is done via ReDeFer.
  • Used with other large XML Schemas in the Digital Rights Management domain like MPEG-21, ODRL and the E-Business domain.

Conclusion

There have been many more attempts by the academic world to crack the multimedia ontology nut, and this has created more or less heterogeneous solutions. We have in this article given a quick overview of the majority of ontologies based on the MPEG-7 standard, listing their main features, technical properties and drawbacks.

There exist many types of metadata and metadata standards, but MPEG-7 is the most widespread multimedia standard, although it too is constrained by the use of XML and the interoperability problems this presents when mapping syntactic data and semantics, i.e. a lack of standardised correspondence between XML schema for definitions and RDF Semantic Web languages.

We find that many ontologies have been built to bridge the semantic gap, but no single ontology fits all scenarios or formats. Semantic elements are far from always properly defined and many alternative ways to model the same descriptions exist, making it difficult to validate or make ontologies understandable by all software. A way to solve ambiguities could be to use a limited set of description tools.

Ontologies wanting to provide full interoperability will have to provide full coverage of the MPEG-7 features, leading to flexible structures, whereas ontologies for reasoning will have to enforce a more rigorous structure, which can become inflexible. It is also worth noting that if metadata is expected to carry semantics, this could lead to verbose and large files, which in turn can make the information redundant.

Nonetheless, the COMM ontology highlights the significance of formally founded standardised description models and shows promising results by using a modular multimedia ontology based on an upper ontology (DOLCE), making it extensible and easy to integrate with domain ontologies.

References

 


Security and Privacy in the Internet of Things

Abstract: The aim of this review report is to gain a broad understanding of privacy and security in IoT and the problems and open issues concerning this area.

Introduction

The Internet of Things (IoT) mainly uses Wireless Sensor Networks (WSN) or Radio Frequency IDentification (RFID) to communicate and connect to the outside physical world. IoT, WSN and RFID technologies are regarded by many researchers as insecure and still partly in the development stages. The key challenges for making IoT more widespread are adding better security between the layers of IoT devices and when communicating with the outside world.

The security aspect will help in dealing with the privacy aspect, which is equally important, since users have to be able to trust that the data the IoT device collects is not leaked to unauthorised parties. IoT is built upon the idea of the Internet; however, IoT is a more challenging area to secure than the Internet, since IoT devices have limited resources.


A mind map of the central idea where thinking about mobile isn’t just “thinking about mobile devices” but also technologies, ideas and approaches. What difference does mobile make to user experience? How do we deal with interfaces which aren’t any longer about screens? What are the privacy implications of crowd sensing?
© Mike https://flic.kr/p/8RU8QS

Literature Review

We have searched for literature using Malmö University’s Summon and Google Scholar. The search terms used are “IoT”, “Internet of Things”, “privacy”, “security”, “survey” and “state of the art”, either as single terms or in combination. We have accessed and read abstracts of some hundred papers and downloaded about 30 papers, of which we find seven to be relevant to our aim of getting an overview of the domain of security and privacy in IoT, and where it is heading. Thus our focus for the chosen papers is on surveys, reviews and state of the art.

Results

Here we present and discuss the papers we find relevant to privacy and security in IoT.

Internet of Things Architecture and Security

A discussion and review of the current research on security requirements of IoT based on the four layers of IoT technology (the Perceptual, Network, Support and Application Layers) is presented by Suo, Wan, Zou, & Liu [1]. The authors highlight security in IoT as more challenging than security on the Internet, since it is difficult to verify that devices have been breached, and argue that the research community should pay more attention to confidentiality, integrity and authenticity of data.

There are four layers in an IoT application: the Perceptual, Network, Support and Application Layers.

Below we describe each layer, their security features and security requirements using definitions by Suo, Wan, Zou, & Liu [1].

The Perceptual Layer

  • Description: Gathers data from the equipment (RFID readers, GPS sensors, etc.) it is attached to. The data can be, for example, a device’s geo-position or the surrounding temperature.
  • Security Features: Access to storage and power is limited, so it is difficult to set up protection or to monitor security breaches.
  • Security Requirements: To deal with authentication, the authors highlight cryptographic algorithms and cryptographic protocols with a small foot-print.

The Network Layer


Panel: The Internet of Things Revolution – Functional, Usable, Wearable (AppsWorld London Notes) Notes from the AppsWorld Europe 2013 panel “The Internet of Things Revolution – Functional, Usable, Wearable” with Tamara Roukaerts, Saverio Romeo, Paul Lee, Ben Moir and Mike Barlow.
© Mike Barlow https://flic.kr/p/8RU8QS

The Support Layer

  • Description: The Support Layer deals with data processing and decision-making based on the collected information. The layer also unites the Network Layer and the Application Layer.
  • Security Features: Difficulties lie in actually knowing whether the data being processed is valid input or a virus.
  • Security Requirements: Anti-virus protection, encryption algorithms and encryption protocols with a small foot-print.

The Application Layer


Activate the world (or: what “mobile” really means)

  • Description: The Application Layer is the outermost layer, facing the users of the IoT device or service, and will often feature some kind of user interface.
  • Security Features: Controlling who has access to the device’s data and to which parts of the data, and with whom the device is allowed to share the data.
  • Security Requirements: Access authentication to protect user privacy and education of users about password management.

Using two case studies of smart homes and medical implants, Kermani, Zhang, Raghunathan, & Jha [2] methodically highlight the problematic areas of embedded systems, how they can be exploited, and further describe possible solutions and workarounds for better hardware and software security for IoT devices.

IoT challenges and opportunities

RFID Chip

A good historical background of Internet of Things and definition of “thing” is discussed by Agrawal & Das [3], where the authors explain the underlying technologies (WSN and RFID) and pick at the security and privacy concerns and problems of these technologies, as well as the interoperability issues of trust and heterogeneous sources communicating. The authors list many challenges and opportunities for Internet of Things. We acknowledge that the elements are highly connected, however we choose to only highlight and comment on challenges and opportunities of security and privacy in IoT.

Security and privacy challenges

Example of Cryptography

Cryptography is the practice and study of techniques for secure communication in the presence of third parties. Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering.

The challenges regarding security and privacy highlighted by [3] are:

  • Standards: Mass IoT rollout requires standardisation of many elements.
  • Privacy: Securing user-device security.
  • Identification and Authentication: Privacy control via authentication.
  • Security: Device communication and inter-communication must be secure.
  • Trust and Ownership: User-trust in collected data.

Security and privacy opportunities

The opportunities regarding security and privacy highlighted by [3] are:

  • Insecure and not Secure: Security software vendors will have an entirely new area to safeguard; however, IoT security is complex to manage.
  • Reachability: IPv6 addresses on every element will make every device reachable, if standards are in place to secure interoperability.
  • Efficiency: Tied to Reachability above, where devices sense and communicate with their surroundings, to help with logistics, tracking and management of data.

Internet of Things and standardisation

Carrying four RFIDs

The security perspective of IoT from a standardisation point of view is argued by Keoh, Kumar & Tschofenig [4], methodically mapping problems facing IoT security to how they can be – and in many ways already are – solved by standardisation. They highlight the efforts of the Internet Engineering Task Force to standardise security within the IoT. Although slightly biased towards their own achievements, they thoroughly examine, evaluate and analyse many problems and levels of security. They also conclude by adding perspectives on Moore’s law and the problem of many new devices’ high power consumption.

Internet of Things contrasted to Internet

Radio-frequency identification (RFID) is the wireless non-contact use of radio-frequency electromagnetic fields to transfer data, for the purposes of automatically identifying and tracking tags attached to objects. Some tags require no battery and are powered and read at short ranges via magnetic fields. Others use a local power source and emit radio waves (electromagnetic radiation at radio frequencies).

The analysis of the security aspects of each layer in IoT objects, their cross-layer issues with heterogeneous integration and the security aspects of IoT is addressed by Jing, Vasilakos, Wan, Lu & Qiu [5], contrasting these issues to how they are dealt with on the Internet. The authors thoroughly go into detail on all aspects of the pros and cons of each layer’s security problems with clear references, contrasting their findings with other internet protocols, namely:

  • IoT is composed of mostly RFID and WSN nodes with limited resources, whereas the Internet is made up of computers, servers and smart devices with many resources.
  • The Internet uses advanced algorithms, security measures and heavy computation; in IoT, power is scarce, so we have to rely on lightweight solutions.
  • Communication in IoT goes through slower and less secure wireless bands, which can result in information being leaked to third parties.
  • PCs and other devices connected to the Internet have operating systems with underlying security, whereas IoT devices only have some code to run the device.

Internet of Things and Privacy

IoT and the concept of Connected

The aim of the note by Mashhadi, Kawsar & Acer [6] is to start a discussion within the HDI and IoT communities to better understand and reflect on the issue of who owns the data created and produced in the IoT environment, and to find relevant models that allow users to give permission and keep control over when and how they share information. The authors do not critically reflect on who owns the data, but indirectly take the stance that the data produced by users is owned by users, without directly backing up this position with any arguments or references. It is simply assumed, even though the title of the paper is “Human Data Interaction in IoT: The Ownership Aspect”.

They do, however, argue that IoT devices collect data from and about people. The authors argue the pros and cons, through many examples, of using secure multi-party computation (SMC) for enforcing and protecting users’ privacy in the IoT domain. The authors conclude that the main obstacle is immature technology, but do not touch on another important aspect, namely that IoT devices do not necessarily have the computational power to carry out such computations. The authors provide a model to solve the problems they define, and discuss possible side effects of their solutions, including illustrating the overlapping application domains vs. data sensitivity.

Internet of Things and the Future Internet of Things

A pile of RFID Rings

Khan, Khan, Zaheer, & Khan [7] take a perspective view of privacy and security in IoT and Future IoT (FIoT), contrasting it with where it currently is. The authors summarise and categorise several key challenges for IoT and point to government bodies currently working to solve these problems.

The authors also point out not only interoperability issues but also findability of devices, since IoT devices need to be aware not only of their surroundings but also of surrounding devices, which they might need to communicate with to accomplish tasks or to collect data from. However, it is difficult to deploy awareness measures and authentication logic in these rudimentary IoT devices to allow such socialising.

Discussion

In this paper we have briefly looked at the security and privacy issues facing Internet of Things. We have described the four layers of IoT devices and mapped their security challenges. We find that IoT is still in a development stage with security challenges that need to be ironed out before the vision of truly smart devices and mass adoption of the technologies can succeed. Security and privacy are hampered by devices with little power to deal with the complex tasks of encryption and authentication.

It seems that most research bases its ideas on the Internet and World Wide Web, where in fact, as many point out, the Internet of Things domain is more complex, since IoT devices are highly autonomous units with little power for authentication or encryption. We have touched on another need for security, namely privacy of the collected data, so unauthorised third parties cannot gain access to the device and scrape the data for unauthorised use. This is, however, also a challenge for IoT, since devices are meant to communicate with the outside world and with each other. The question remains open as to who should control communication, and how.

References

  • [1] Suo, H., Wan, J., Zou, C., & Liu, J. “Security in the internet of things: a review”, Computer Science and Electronics Engineering (ICCSEE), 2012 International Conference on. Vol. 3. , 2012. IEEE
  • [2] Kermani, M. M., Zhang, M., Raghunathan, A., & Jha, N. K. “Emerging Frontiers in embedded security”, VLSI Design and 2013 12th International Conference on Embedded Systems (VLSID), 2013 26th International Conference on, 2013. IEEE
  • [3] Agrawal, S., & Das, M. L. “Internet of Things – A paradigm shift of future Internet applications”, Engineering (NUiCONE), 2011 Nirma University International Conference on, 2011. IEEE
  • [4] Keoh, S., Kumar, S. & Tschofenig, H. “Securing the Internet of Things: A Standardization Perspective”, 2014.
  • [5] Jing, Q., Vasilakos, A. V., Wan, J., Lu, J., & Qiu, D. “Security of the Internet of Things: Perspectives and challenges”, 2014.
  • [6] Mashhadi, A., Kawsar, F., & Acer, U. G. “Human Data Interaction in IoT: The ownership aspect”, Internet of Things (WF-IoT), 2014 IEEE World Forum on, 2014. IEEE
  • [7] Khan, R., Khan, S. U., Zaheer, R., & Khan, S. “Future Internet: the internet of things architecture, possible applications and key challenges”, Proceedings of the 2012 10th International Conference on Frontiers of Information Technology, 2012. IEEE Computer Society

 

Alan Turing © Charis Tsevis https://www.flickr.com/photos/tsevis/ https://flic.kr/p/fiVEFG

Artificial Intelligence – a very short introduction

Definitions of Artificial Intelligence

Below we highlight four definitions of Artificial Intelligence (AI).

  • “Artificial Intelligence is a discipline devoted to the simulations of human cognitive capabilities on the computer” (Rajaram, 1990).
  • “Artificial Intelligence is a new science of researching theories, methods and technologies in simulating or developing thinking process of human beings” (Ling-fang, 2010).
  • “Artificial Intelligence is an attempt to understand the substance of intelligence, and produce a new intelligent machine that could make reactions similar to the human intelligence” (Ning and Yan, 2010).
  • “The capability of a device to perform functions that are normally associated with human intelligence, such as reasoning and manipulating factual and heuristic knowledge” (Hosea, Harikrishnan and Rajkumar, 2011).

The field of Artificial Intelligence (AI) connects with other science fields such as information theory, cybernetics, automation, bionics, biology, psychology, mathematical logic, linguistics, medicine and philosophy (Ning and Yan, 2010).

Hosea, Harikrishnan and Rajkumar (2011) argue that a machine is truly AI if it solves certain classes of problems requiring intelligence in humans, or survives in an intellectually demanding environment. Following this, one could divide the definition into two parts: the epistemological part, that is, the real-world representation of facts, and the heuristic part, where the facts help solve the problem through rules. The authors identify four requirements a device must meet in order to be said to have artificial intelligence, and highlight the advantages and disadvantages of Artificial Intelligence.

  • Requirements: Human Emotion; Create data associations to make decisions; Self-consciousness; and Creativity and Imagination.
  • Advantages of AI: No need for pauses or sleep; Rational or pre-programmed emotions could make for better decision-making; Easy to make multiple copies.
  • Disadvantages of AI: Limited sensory input compared to humans; Humans can deteriorate but still function, devices and applications quickly grind to a halt when minor faults set in.

AI is generally seen as an intelligent aid. Humans regard themselves as always making rational, optimal choices. In that light, intelligent computers will always try to find the correct medical diagnosis or try to win at a game. However, reality is more blurred. Humans can have hidden motives for losing a game, perhaps to let a child build confidence, or for prescribing different medicine based on the patient’s attitude (Waltz, 2006).

Paradigms in Artificial Intelligence

Marvin Minsky

AI revolves more around engineering and has no fixed theories or paradigms. Having said that, the two main paradigms to receive traction are J. B. Baars’ Global Workspace Theory from his 1988 book “A Cognitive Theory of Consciousness” (Baars, 2005) and the agent-based model independently invented and championed by R. A. Brooks (Brooks, 1990) and by Marvin Minsky in his book “Society of Mind” from 1988 (Brunette, Flemmer and Flemmer, 2009).

Baars: The Global Workspace Theory uses a theatre metaphor of a spotlight shining on one area (the stage), while there is a lot going on behind the scenes. Humans can focus on and complete a task while many other things are going on at the same time.

Minsky: Believes that consciousness is made up of many smaller parts or agents, which collectively work together to produce intelligence.

Brooks: Builds cognition using a layered approach, where each layer can act upon or suppress input from layers below it.

History of Artificial Intelligence

C.E. Shannon.

The year 1956 and Dartmouth College are regarded as the birthdate and birthplace of AI, since this is the first time the phrase Artificial Intelligence was used. Many of the attendees (John McCarthy, Marvin Minsky, Claude Shannon, Nathan Rochester, Arthur Samuel, Allen Newell, and Herbert Simon) became leaders within the field of AI and went on to open departments at MIT, Stanford, Edinburgh, and Carnegie Mellon University (Brunette, Flemmer and Flemmer, 2009).

However, Alan Turing’s Turing Test from 1950 already captures the idea of programming a digital computer to behave intelligently, and Strachey’s checkers program from 1952 is another example of an intelligent computer (Hosea, Harikrishnan and Rajkumar, 2011); so too are Vannevar Bush’s “Memex” concept from 1945 and “The Turk” from the eighteenth century (Buchanan, 2005).

Timeline of Artificial Intelligence

Professor John McCarthy

1950 – 1969: The 1950s and 1960s saw a rise in methodologies and applications for problem-solving, pattern recognition and natural language processing. The programming language LISP was invented in 1960 by John McCarthy (Brunette, Flemmer and Flemmer, 2009). However, these applications had trouble scaling to take on larger problems (Singh and Gupta, 2009). In 1969 the International Joint Conferences on Artificial Intelligence (IJCAI) was formed.

1970 – 1989: The 1970s and early 1980s saw the rise of expert systems, but also a dawning realisation of the complexity of AI and the understanding that this was a lot more complicated than first thought. The programming language PROLOG was added to the AI stack, making it possible to use logic to reason about a knowledge base. The late 1980s saw the introduction of intelligent agents that react to their environment (Brunette, Flemmer and Flemmer, 2009).

Robotic hand holding a lightbulb.

Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing.

1990 – 1999: In the 1990s, intelligent agents, robotics and embodied intelligence found their way into R&D projects, with the improvement of computing power, sensors and the underlying theory. Applications began to focus on helping businesses and organisations. The late 1990s saw intelligent agents being connected, leading to the idea of Distributed Artificial Intelligence via the web.

2000 – present: A main focus is adding consciousness, human-like behaviour and emotions to machines (Brunette, Flemmer and Flemmer, 2009). Another area of focus is machine learning, data mining, algorithms and collective intelligence, due to the amount of unstructured data available on the web (and in databases) and the need to make sense of it (Singh and Gupta, 2009). AI also plays a major role in the social sciences and Social Network Analysis (Ling-fang, 2010).

The future of Artificial Intelligence: Waltz (2006) predicts that the future of AI, for the next 20 years, will be determined by the interaction of three factors: financial factors (funding), technical factors (useful applications) and scientific factors (intelligent progress), with a main focus on “cognitive prosthesis” and semantic applications, i.e. converging to a more industrial-revolution-like outlook of helping humans complete tasks they dislike or do poorly. Research into the underlying theory will diminish. Funding will come from private companies like Google, Yahoo and Microsoft in collaboration with academia. NASA, the National Science Foundation (NSF) and other government bodies will not be willing to continue to fund AI research. Waltz identifies five areas that will thrive, namely:

and two other fields: AI theory and algorithms, and Turing Test AI, which Waltz regards as wildcard areas, since they can’t realistically produce practical results.

Concepts in Artificial Intelligence

Mosaic portrait of Alan Turing using the mathematical analysis used to decode the Enigma machines during World War II.

Expert Systems (Expert AI): Expert systems rely on an inference engine and a knowledge base. The engine is often rule-based (Rajaram, 1990). Expert systems are used to assist in decision-making. Usage examples: blood infection diagnostics and credit authorisation (Ling-fang, 2010).

Symbolic Mathematical Systems: Computer programs problem-solve using symbols instead of numbers (Rajaram, 1990).

Intelligent Communication Systems: Allows for communication between humans and machines (Rajaram, 1990).

Signal Based Systems: Signal based communication refers to input (vision and speech recognition) and output (visualisation and speech generation) (Rajaram, 1990).

Example of a Natural Language Processing application.

Symbol Based Systems and Natural Language Processing: Symbol based communication refers to understanding natural language, i.e. semantics or reasoning about what is meant in a sentence (Rajaram, 1990). Currently this is an area that gets a lot of attention, due to the amount of data available on Social Media and on the web (Ling-fang, 2010)

Machine Learning: Machine-learning reasons about data by studying examples and using problem-solving and decision-making skills, rather than following a set of rules (Rajaram, 1990).

Logic-Based Learning Systems: Here the computer uses logic to reason about the input, i.e. if this and this and this is true, then that is true also (Rajaram, 1990).

Biological Analog Learning Systems: Computers built to resemble the biological system of the human body and brain (Rajaram, 1990).

Robotics: The goal is to create machines that can perform tasks for humans, not only in an industrial-age type of way with continuous automation, but also to intelligently analyse each step and take action depending on the task at hand (Ling-fang, 2010).

The Asimo Robot

A robot is a mechanical or virtual artificial agent, usually an electro-mechanical machine that is guided by a computer program or electronic circuitry. Robots can be autonomous, semi-autonomous or remotely controlled and range from humanoids such as ASIMO and TOPIO to nano robots, ‘swarm’ robots, and industrial robots. By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.

References

  • Baars, J. B. (2005) “Global workspace theory of consciousness: toward a cognitive neuroscience of human experience?”, Progress in Brain Research, Vol. 150, pp. 45 – 52.
  • Brooks, R. A. (1990) “Elephants Don’t Play Chess”, Robotics and Autonomous Systems Vol. 6, pp. 3 – 15.
  • Brunette, E. S., Flemmer, R. C. and Flemmer, C. L. (2009) “A Review of Artificial Intelligence”, Proceedings of the 4th International Conference on Autonomous Robots and Agents, Wellington, New Zealand, pp. 385 – 392.
  • Buchanan, B. G. (2005). “A (very) Brief History of Artificial Intelligence”, American Association for Artificial Intelligence – 25th anniversary issue, pp. 53 – 60.
  • Hosea, S., Harikrishnan, V. H.  and Rajkumar, K. (2011) “Artificial Intelligence”, 3rd International Conference on Electronics Computer Technology, Vol. 1, pp. 124 – 129.
  • Ling-fang, H. (2010) “Artificial Intelligence”, 2nd International Conference on Computer and Automation Engineering (ICCAE), Vol. 4, pp. 575 – 578.
  • Ning, S.,  Yan, M. (2010) “Discussion on Research and Development of Artificial Intelligence”, IEEE International Conference on Advanced Management Science(ICAMS 2010), Vol. 1 , pp. 110 – 112.
  • Rajaram, N. S. (1990) “Artificial Intelligence: A Technological Review”. ISA Transactions. Vol. 29 (1), pp 1 – 3.
  • Singh, V. K. and Gupta, A. K. (2009) “From Artificial to Collective Intelligence: Perspectives and Implications”, 5th International Symposium on Applied Computational Intelligence and Informatics, Timisoara, Romania, pp. 545 – 549.
  • Waltz, D. A. (2006) “Evolution, Sociobiology, and the Future of Artificial Intelligence”, IEEE Intelligent Systems, pp 66 – 69.

Decision Tree (for coffee) in 7-Eleven


A shallow but wide “Decision Tree” for a coffee machine in a 7-Eleven. Once I have decided which coffee I want, I only have to press one button to complete the task. The problem is the many upselling signs on the side of the machine, which confuse the process.

Don Norman talks about the Gulf of Goals, where the user, whom we all bow to, has certain goals. It is our job as creators of “the machine” to make the gulf as easy to cross as possible by striking common ground with the user.

User obstacles – paths and trees

Don’t wall in the user

I think this is a good example of how people will always try to find shortcuts, and as such you should not try to limit them in achieving their task. They want to get from point A to point B as quickly as possible.

In this case, there are two tracks in the park that run next to each other and then divide. When choosing one or the other path, it is not obvious that one of them will have no exits until you reach the end of the track. The other track, however, has many exits, and it looks like people got tired of not being able to make an exit.

Hole in the fence. Christianshavn, Copenhagen, Denmark.

Don’t set up obstacles for the user

I saw this tree in Østeranlæg in Copenhagen and it’s kind of strange that the tree has engulfed the wire fence, which doesn’t look to be even remotely as old as the tree. How long has it been here?

Tree engulfs wire fence. Østre anlæg, Copenhagen, Denmark.