Again this year I am attending the GOTO Copenhagen Software Developer conference. From prior experience I anticipate three days overloaded with information, and I will try my best to propagate as much of it onto my blog as possible.

After the first welcome and initial introduction to the conference, the program started with a keynote on Google's Dart language by Kasper Lund from Google.

Kasper was one of the engineers behind Google’s V8 Javascript engine, now involved with the development of the Dart language.

The Dart team's unofficial goal is to "awesome the web". The real goals behind the Dart project are unclear to me, and I have not been following Dart development at all. Many new languages have been popping out of the woodwork over the last years, and I find it hard to keep up with just the frameworks for the languages I work with on a daily basis. Anyway, being seated and fed a breakdown of some of the key factors and features of a new language like Dart is always welcome and one of the aspects of going to conferences I really enjoy – so I just kicked back and listened to Kasper.

The evolution and more serious use of Javascript might be one of the reasons why Google is developing Dart. Kasper mentioned the speed increase in Javascript execution in modern browsers (source: http://iq12.com/blog/as3-benchmark/) and Javascript's popularity in general, with projects like V8, Node.js (based on V8) and jQuery.

Dart aims to be able to run on both the server side and the client side. On the client side this is currently done either by compiling to Javascript or by using the Dart virtual machine, which is currently only supported in Google's own browser Chrome (or Dartium). This part was a bit unclear to me and it took some time before I understood that both sides were targeted.

One of the reasons for Google developing Dart might be the nature of Javascript. Javascript does not make it easy to write large applications for the web, and Javascript has many issues. Kasper outlined some basic examples of Javascript behavior where the language does not seem completely consistent. In Kasper's words Javascript is full of surprises, and considering his background I pretty much take his word for it.

The examples given by Kasper were on data types, basic operations like addition, and the lack of out-of-bounds checking for arrays. Javascript is very forgiving and goes to great lengths to get a piece of code to execute. All of these issues are addressed in Dart.
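
To make that a bit more concrete, here are a few small Javascript snippets of my own (not Kasper's actual examples) showing the kind of implicit type coercion and missing out-of-bounds checking he was alluding to:

```javascript
// Implicit type coercion makes basic operations inconsistent:
console.log(1 + "2");    // "12"  - the number is coerced to a string
console.log("5" - 1);    // 4     - but here the string is coerced to a number
console.log([] + {});    // "[object Object]"

// Arrays have no out-of-bounds checking:
var a = [1, 2, 3];
console.log(a[10]);      // undefined - no error raised
a[10] = 42;              // silently grows the array
console.log(a.length);   // 11
```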

At the same time Dart seems to copy a lot of Javascript behavior. Kasper would probably be able to give even more concise examples and make a much more compelling case for choosing Dart over Javascript. Some of the key features mentioned by Kasper were the client/server side support and the optional static type handling, which gives highly readable source code. The client-side integration is still fuzzy to me, but it seems possible to integrate with existing Javascript, and Dart is in some parts mimicking jQuery.

Dart supports incremental development and it seemed really easy to get started using the available toolchain, which includes:

Kasper also mentioned an interesting test method using a headless browser, where you inspect the DOM render tree. This seems to be doable by using some of the accessibility hooks in Chrome, but I am not sure and I have not dug further into the topic.

Dart is very intriguing and so much more than just "curly braces and then some…"

(cross posted from logiclab.jira.com)

Monday evening I attended a local OWASP chapter meeting. The guest speaker was Jim Manico. Jim Manico is the person behind the OWASP Podcast and is extremely knowledgeable on the topic of security, in particular security for web development.

Jim started out by turning the whole problem of security around: instead of talking about the top 10 threats, he wanted to present a set of top 10 defenses for web applications, emphasizing the importance of talking about remedies instead of threats.

The most important remedy is query parameterization (prepared statements with bound parameters); this addresses the number one and one of the most destructive threats, also known as SQL injection. Jim mentioned the OWASP Query Parameterization cheat sheet dedicated to this topic. The remedy is very basic, but the problem of SQL injection does not seem to go away. As a side note, OWASP also has a cheat sheet dedicated to SQL injection prevention.
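
As a minimal sketch of what query parameterization looks like in practice – here using the node-postgres (pg) client; the table and column names are purely illustrative assumptions:

```javascript
// Minimal sketch of query parameterization with the node-postgres ("pg") client.
// The "users" table and its columns are illustrative assumptions.
const { Client } = require('pg');

async function findUserByEmail(email) {
  const client = new Client();           // connection settings taken from the environment
  await client.connect();
  // The value is passed as a bound parameter ($1) and never concatenated into
  // the SQL string, so it cannot change the structure of the query.
  const result = await client.query(
    'SELECT id, name FROM users WHERE email = $1',
    [email]
  );
  await client.end();
  return result.rows;
}
```

The same pattern exists for practically every database driver; string concatenation of user input into SQL is the thing to avoid.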

Next, Jim started listing a lot of threats, which all had the common denominator: XSS. This was followed by a nice matrix of defenses. Each of the different XSS threats exploits different vectors/vulnerabilities, primarily in Javascript, but the matrix gave a nice overview of what to look for and how to defend your application.

Some basic examples of attack vectors could be:

  • session hi-jacking/cookie theft: document.cookie and window.location
  • defacement: document.body.innerHTML

Again Jim emphasized that this is hard to get right, and even some of the more prominent and talented developers do not see all of these issues.

Jim mentioned a resource recommending Element.setAttribute, which has several issues due to its power.
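
To make the attack vectors above a bit more concrete, here is a small illustration of my own (not from Jim's slides) of why innerHTML and setAttribute are dangerous with untrusted data, and why textContent is a safer default:

```javascript
// Untrusted input rendered with innerHTML is parsed as markup and can execute:
const untrusted = '<img src=x onerror="alert(document.cookie)">';
const comment = document.getElementById('comment');     // assumed element
comment.innerHTML = untrusted;                           // XSS: the onerror handler fires

// Rendering the same value as text is safe - the browser does not parse it:
comment.textContent = untrusted;

// setAttribute is equally powerful; an attacker-controlled URL attribute can
// smuggle in a javascript: URI:
const link = document.getElementById('profile-link');    // assumed element
link.setAttribute('href', 'javascript:alert(document.cookie)');
```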

See also:

Next up was CSRF; the defenses mentioned were:

  • double cookie submit
  • Same Origin Policy
  • randomized tokens

Jim spoke well for the randomized tokens model, whereas I tend towards the double cookie submit model – perhaps a combination of all the defenses would be optimal. Also, considering that Jim himself mentioned theft of entropy-based tokens using an empty HTML form, I do not see the randomized tokens model standing alone; if it has the least weakness in its implementation it is useless.
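
For reference, here is a minimal sketch of the randomized (synchronizer) token model as I understand it, expressed as Express middleware with express-session; the field and function names are my own illustrative assumptions:

```javascript
// Sketch of the randomized (synchronizer) token pattern for CSRF protection.
// Assumes an Express app with the express-session middleware configured.
const crypto = require('crypto');

function issueCsrfToken(req) {
  // One cryptographically random token per session, stored server-side.
  if (!req.session.csrfToken) {
    req.session.csrfToken = crypto.randomBytes(32).toString('hex');
  }
  return req.session.csrfToken;
}

function verifyCsrfToken(req, res, next) {
  const submitted = req.body._csrf;   // hidden form field; the name is an assumption
  const expected = req.session.csrfToken;
  // timingSafeEqual avoids leaking token bytes through timing differences.
  const ok = submitted && expected &&
    submitted.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(submitted), Buffer.from(expected));
  if (!ok) return res.status(403).send('Invalid CSRF token');
  next();
}
```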

OWASP even has a cheat sheet.

Jim repeated over and over again that attacking is easy, defending is hard. An example: writing a key logger is as easy as listening for key-down/key-up events in Javascript.

Jim's take on input validation surprised me a bit, but I could follow him. If you have your general defenses in place, input validation is actually more about data sanitization than security. Jim mentioned that handling your output is more important. Which leads me to something I have heard mentioned earlier: you cannot trust your back-end either, since you cannot be sure how the data was inserted/injected in the first place, so result sets, for example from a database, should be regarded with equal mistrust. The issue is really that the malicious data does not do any harm until it reaches a browser again.

Jim mentioned a defense of using type strictness. I understand the concept, since it is often used in regular defensive programming and is not specific to security, but it does require developers using dynamic languages, like me, to think about implementing this sort of type checking as part of the application, since it is not necessarily a part of the language itself – this leads back to input validation and data sanitization, so this sort of defense does require strict handling of input data.
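
A tiny example of what such application-level type strictness can look like in Javascript, where nothing in the language enforces it for you (my own sketch):

```javascript
// Force an incoming identifier to be a positive integer before it is used;
// anything else is rejected instead of being silently coerced.
function parseId(raw) {
  const id = Number(raw);
  if (!Number.isInteger(id) || id <= 0) {
    throw new Error('Invalid id: ' + raw);
  }
  return id;
}

parseId('42');                    // 42
parseId('42; DROP TABLE users');  // throws
```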

Jim gave some overall recommendations:

  • do not let users define style (CSS), allow only pre-defined themes
  • do not rely on client side defenses (Javascript in the browser)
  • be careful with WYSIWYG editors like TinyMCE; these might use plain HTML

The latter does require that you parse your HTML rigorously (see also: OWASP DOM XSS cheat sheet).

On the topic of parsing: the most widespread way of deserializing JSON is by using eval. Use a proper JSON parser instead.
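
A small illustration of the difference (my own example, not Jim's):

```javascript
// eval executes whatever the string contains - including attacker code:
const payload = '({"name": "x"}); alert(document.cookie)';
// eval(payload);                         // would run the attacker's alert

// JSON.parse only accepts valid JSON and never executes anything:
const data = JSON.parse('{"name": "x"}');
console.log(data.name);                   // "x"
```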

I often see a tendency in the security community to think highly of frameworks, but with all of these new and spiffy Javascript frameworks and their plugins, I really hope some of the developers take the time to evaluate and assess the security of these.

Next up was the largest of the topics Jim covered and I think one of the topics which interests him the most currently: Access Control Design Theory.

Jim grouped the attacks in this category into the following groups:

  • Vertical Access Control Attacks
  • Horizontal Access Control Attacks
  • Business Logic Access Control Attacks

Jim stated that the access control component has to be introduced early, deeply and globally in your system. Implement a mapping of features, policies and a central point of access.

Vertical access control attacks are your regular attacks where privilege escalation is attempted. Horizontal access control attacks are, I guess, related to 'Insecure Direct Object References', which is listed in the OWASP top 10. This is where the attacker is able to obtain access to data related to other entities, by exploiting a role system or the like.
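
A minimal sketch of how a horizontal access control hole is typically closed: scope the lookup by the authenticated owner, taken from the server-side session, instead of trusting the record id the client sends. The table, columns and request shape here are illustrative assumptions:

```javascript
// Vulnerable: any authenticated user can read any invoice by guessing ids.
// const result = await client.query('SELECT * FROM invoices WHERE id = $1',
//                                   [req.params.id]);

// Safer: the owner id comes from the server-side session, not from the request.
async function getInvoice(client, req) {
  const result = await client.query(
    'SELECT * FROM invoices WHERE id = $1 AND owner_id = $2',
    [req.params.id, req.user.id]
  );
  return result.rows[0];   // undefined if the invoice belongs to someone else
}
```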

Jim listed some anti-patterns, many of which I see on a regular basis:

    • Hard-coded role checks in application code
    • Lack of centralized access control logic
    • Untrusted data driving access control decisions
    • Access control that is “open by default”
    • Lack of addressing horizontal access control in a standardized way (if at all)
    • Access control logic that needs to be manually added to every endpoint in code

Luckily Jim also presented some good patterns:

    • Code to the activity, not the role
    • Centralize access control logic
    • Design access control as a filter
    • Deny by default, fail securely
    • Build centralized access control mechanism
    • Apply same core logic to presentation and server-side access control decisions
    • Server-side trusted data should drive access control
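
As a sketch of what a few of these patterns can look like in combination – coding to the activity, centralizing the check as a filter, and denying by default – here as Express-style middleware; the permission lookup and the route are illustrative assumptions:

```javascript
const express = require('express');
const app = express();

// Illustrative permission lookup; a real implementation would consult a
// centralized policy store.
function userHasPermission(user, activity) {
  return Array.isArray(user.permissions) && user.permissions.includes(activity);
}

// "Code to the activity, not the role", applied as a centralized filter.
function requirePermission(activity) {
  return (req, res, next) => {
    const user = req.user;   // assumed to be set by the authentication layer
    // Deny by default: no user, or no explicit grant, means no access.
    if (!user || !userHasPermission(user, activity)) {
      return res.status(403).send('Forbidden');
    }
    next();
  };
}

// The check is expressed in terms of the activity ("delete-invoice"),
// not a hard-coded role ("admin"), so the policy can change centrally.
app.delete('/invoices/:id', requirePermission('delete-invoice'), (req, res) => {
  res.send('invoice ' + req.params.id + ' deleted');   // placeholder handler
});
```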

A good example of handling operations on valuable data was Amazon.com. If you want to:

  • change credentials
  • change entity references, like email, mobile phone number etc.

You have to re-authenticate.

One interesting problem related to horizontal access control, which Jim shed some light on, was the multi-tenant setup. This is the concept, seen with hosted solutions and the like, where you have multiple groups of users using the same system but working on separate sets of data.

I see the same challenge in regard to regular users vs. administrative users; this is an aspect which coincides very well with my ideas around the use of public and private PaaS to separate deployments dedicated to regular use vs. administrative use.

Jim finished the presentation with a discussion on the problem area of passwords, and the time had come to talk about SSL. Jim stated that SSL only gives us integrity, authenticity, confidentiality and repudiation. SSL is good, but SSL alone does not keep you safe. According to W3C the GET method leaks, so use it with consideration.

In general you have to formulate a set of password policies and Jim gave some recommendations:

  • Allow 10-15 retries, not only 3-4, we are fighting machines not people
  • At 10 bad attempts, notify user
  • Utilize out of band channels: token/SMS/email
  • Avoid session fixation: keep anonymous vs. authenticated sessions separate and do not re-use session identifiers, since the unauthenticated session might already have been hi-jacked when the user authenticates (see the sketch after this list)
  • <input autocomplete="off">
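
As a sketch of the session fixation point above: issue a brand new session identifier at the moment of authentication, so the anonymous session id (which might already be known to an attacker) is never promoted to an authenticated one. Here with express-session; the authenticate function and route are illustrative assumptions:

```javascript
// Sketch of avoiding session fixation with express-session: regenerate the
// session id at the moment of authentication.
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

// Illustrative stand-in for a real credential check against your user store.
function authenticate(username, password, cb) {
  cb(null, null);   // fail closed in this sketch
}

app.post('/login', (req, res) => {
  authenticate(req.body.username, req.body.password, (err, user) => {
    if (err || !user) return res.status(401).send('Login failed');
    // Discard the anonymous (possibly hijacked) session id and issue a new one.
    req.session.regenerate((regenErr) => {
      if (regenErr) return res.status(500).send('Session error');
      req.session.userId = user.id;
      res.redirect('/');
    });
  });
});
```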

User lock-out can be useful, but it can be straining on your resources, so you need to provide your users with ways to unlock just as you provide functionality for handling the case of forgotten passwords.

OWASP provides a forgotten password cheat sheet with more details on the topic of recovering forgotten passwords.

I am in a position where I am designing an access control sub-system, so the timing could not have been better. The evening was a blast and I am drowning in notes.

I will be giving a talk at Open Source Days 2012 in Copenhagen on ActiveState's cloud solution Stackato.

Stackato is a cloud solution from renowned ActiveState. It is based on the Open Source CloudFoundry and offers a serious cloud solution for Perl programmers, but also supports Python, Ruby, Node.js, PHP, Clojure and Java.

Stackato is very strong in the private PaaS area, but it also supports use as a public PaaS and deployment onto Amazon's EC2.

The presentation will cover basic use of Stackato and the reasons for using a PaaS, public as well as private. Stackato can also be used as a micro-cloud for developers, supporting vSphere, VMware Fusion, Parallels and VirtualBox.

Stackato is currently in public beta, but it is already quite impressive in both features and tools. Stackato is not Open Source, but CloudFoundry is and Stackato offers a magnificent platform for deployment of Open Source projects, sites and services.

ActiveState has committed to keeping the micro-cloud solution free, so it offers an exciting capability and extension to the developer's toolbox and toolchain.

More information will follow, and the presentation will be made available online once it is ready.

Following the keynote on day 2 of Internetdagarna was Dr. Matt Wood from Amazon. Matt Wood is a platform evangelist, working on the Amazon Web Services (AWS).

It did not take long from when Matt Wood started until Twitter went crazy. People did not consider Matt's talk a keynote, but merely a sales pitch. My take on this was somewhat divided: yes, the talk was a sales pitch and as a keynote it failed, but at the same time the topic was of a lot of professional interest to me.

I have decided to go over my notes here anyway, even though I think Amazon did not understand the assignment of delivering a visionary keynote on cloud computing at an Internet conference; instead they did a 2.99 sales pitch, without capturing the majority of the audience.

Well, disappointment aside and on to the pitch.

Matt stated that Amazon is a tech company that happens to run a book store. All of their experience and expertise in running an international web-based bookstore has been invested into their web service solutions.

AWS started by offering programmatic developer access (an API) to their commerce platform for accessing metadata.

In addition Amazon now offers a scalable infrastructure cloud solution named EC2 and a storage solution S3.

Matt focused on the EC2 part and the functional offering instead of the data and storage based offerings.

Matt presented an intriguing view on what problem it is that cloud computing solves. In traditional IT projects and software development it is the handling of infrastructure that creates the friction. The postulate from Amazon is that infrastructure handling, which they refer to as heavy lifting, is 70% of the effort, and 30% is the actual development, where the actual business value is added. The pitch from Amazon is that they want to maximize the latter.

Matt also stated that the cloud drives innovation, making the transition from idea to product easier and providing start-ups with essential leverage, so investment can be kept to an absolute minimum.

EC2 has a very low barrier to entry:

  • on-demand access
  • low cost, where you pay as you go
  • utility computing and utility infrastructure
  • flexibility, lots of flexibility

An example was Animoto.

Lots of issues remain. Matt Wood mentioned the shared responsibility model, which is used by Amazon to establish a mutual responsibility for security aspects. Amazon has published two whitepapers on the topic. In regard to regulation, Matt emphasized that in the AWS cloud data is local; data is not mirrored from Europe to the US, for example.

I will hopefully write about cloud computing in the future since I am evaluating and experimenting with a micro cloud solution supporting Perl.

Day 2 of Internetdagarna started with two keynotes. The first speaker was Yochai Benkler from the Berkman centre at Harvard University.

I had been following the tweets from day one of Internetdagarna tagged #ind11 and Yochai Benkler had given a talk entitled ‘Wikileaks and the future of the press’, which had been very well received, so it was with some expectations I sat down to listen to the keynote.

The keynote examined the concepts of innovation and open source as the primary motors of a new economy vs. the traditional industrial economy based on traditional industries.

Yochai emphasized some of the key aspects of these drivers; one of the terms he used was "imperfect rapidly learning system". Much of what he mentioned seemed to have its roots in the agile movement, or the other way around, so it was not completely new to me, and if you are into software development methodology you could easily follow the patterns mentioned by Yochai.

One of the classic examples of open source success mentioned by Yochai was the HTTP server. This market is still dominated by Apache, but other open source candidates like nginx are also showing on the graph (this is not the graph used by Yochai, but I think the data are from the same source and the tendency demonstrated is the same).

Another interesting aspect was based on IBM. IBM, the largest patent holder in the US, now makes more money on Linux-based activities than on their traditional business based on proprietary hardware and software.

Conclusion: open source has become a serious factor.

Another interesting factor Yochai mentioned was the network aspect. The example was how Wikipedia has outcompeted Encarta from Microsoft, which historically outcompeted something like Encyclopedia Britannica distributed using dead wood. The network outcompeted the CDROM based distribution, which again outcompeted the book. Looking at the distribution chain and logistic differences in distributing the three, it is easy to spot why the first is the victor.

What the above example demonstrates is that innovation is the key in competition, but as Yochai states: Innovation as an industry is fundamentally different from traditional commodity based industries.

Yochai mentioned that innovation had historically been on the side of the traditional industries. From there Yochai started talking about people and knowledge, making the point that innovation comes from people. I understand how he made the connection, but I do not understand how you can dismiss traditional innovation; after all, innovation has always been around, but I might have missed one of Yochai's points.

Yochai then stated that knowledge is tacit and sticky and is transferred with people. Creativity cannot be controlled, which makes motivation of people an important parameter. Other aspects of this can also be observed, such as a behavioral value shift where earlier peripheral activities are becoming core value and social aspects become a great motivator. Humans are pro-social beings, hence humanization becomes an important factor.

Yochai started to talk about people vs. companies. His examples were of course taken from the USA. A funny thing he mentioned was the comparison of legislation between California and some other states. He referred to this as the historical accident in California. Apparently the legislation in the state of California makes it easier for employees to change employer. What can be observed elsewhere is competition-regulating law and what I expect to be non-compete clauses and the like.

He presented a resource, WIPO, which has an article entitled 'Trade Secrets and Employee Loyalty' stating that employees are the biggest threat – hilarious, yet scary taking into consideration Yochai's claim that we are facing a paradigm shift in economic models, where innovation becomes the prime factor.

Yochai mentioned lots of interesting resources throughout his presentation. Towards the end of the talk he came to the topic of start-ups, not with focus on the idea of starting up, but focusing more on what it is that these start-ups do differently and why they succeed.

I noted the following:

  • Sunlight Foundation, working with open data and government transparency
  • www.ushahidi.com mash-ups: violence maps in Kenya, wildfires in Russia and damage control in Haiti
  • Skype/KaZaa, using open standards to innovate

His observations are that these new companies come from the edge, do something which it is said cannot be done, and which is not necessarily allowed due to the traditional way of protecting trade secrets and business models. One of his examples here was Apple's App Store, where Google Voice and Skype were allowed after the FCC leaned on Apple.

Yochai's conclusion was that freedom is required to do innovation in decentralized and open systems.

I am not sure, but I do hope I captured the essence of Yochai's keynote.

I attended the seminar with two sessions on certificates and SSL. These two presentations were however repeated on day 3 as part of the OWASP track, so I have decided to postpone the blog posts on these topics – revisiting the two talks most certainly did not hurt.

The last presentation I attended on day 1 was entitled ‘United Nations and Internet Governance’ and it was in English. This is one of those sessions I would normally not attend, but experience tells me that attending sessions you would not normally consider often leads to surprising insights and interesting angles.

The session started out with Markus Kummer (the former vice president for ISOC) giving an overview of the “Internet Governance Forum” (IGF).

The IGF is a special organization working under the United Nations (UN), see the about page on the IGF website.

The IGF works on what can be considered global problems with the Internet. IGF is an open platform for a plethora of stakeholders to debate and discuss the Internet. The IGF does as such not hold any sort of mandate, but sees to it that concerns and issues are raised in the right fora and organizations. IGF differs from classical intergovernmental organizations and fora since it is based on a multi-stakeholder collaboration model.

An example given was the IDN issue, raised by many countries with alphabets ranging outside the 7-bit ASCII alphabet. The issue was raised in IGF and then solved via the proper stakeholders.

IGF was formed as an outcome of the “World Summit on the Information Society” (WSIS) work in 2005, with a timeframe of 5 years. In 2009 the UN extended this for 5 more years. Whether this will be extended further is hard to predict, since some stakeholders are interested in more governmental influence and a closer binding to UN processes and structure.

Markus mentioned the ‘Tunis Agenda’ as one of the important documents describing the IGF work and premises. IGF is described as a very extraordinary UN forum, in the sense that it works with stakeholders outside the normal governmental sphere dominating the UN. Markus emphasized the importance of these stakeholders and the paradigm under which IGF conducts its work, since non-governmental stakeholders provide a reality check, which is most needed when dealing with something as complex as the Internet.

Following Markus was Juuso Moisander, who represents the Finnish government in IGF and EuroDIG. EuroDIG is the European branch of IGF and is holding its next meeting in Stockholm, Sweden in January 2012.

Last speaker in this seminar was Nurani Nimpuno (@nnimpuno); Nurani is one of the many stakeholders playing an important role in the IGF. Nurani supplemented Markus and mentioned the IBSA proposal (PDF), which is another important document produced in the context of IGF. The mentioned documents are interesting if you want to get more in depth with the work going on in IGF.

I do not feel my post is giving this particular seminar the depth and detail it deserves. The topic was quite interesting, and Markus, Juuso, Nurani and the moderator Staffan Jonson provided excellent insights and descriptions of the workings of IGF, but this was very new territory to me, so I might not have caught sufficient detail and angles to capture all the facets of the IGF work. I hope that this post can help spark an interest in the work carried out by the IGF.

One funny thing I did pick up, judging from this post and the seminar, is that the use of acronyms is most certainly not restricted to technical documentation and systems.

After two pretty heavy keynotes by Bruce Schneier and Lynn St Amour I decided to attend something less political and more technical – I am a technician after all. So I attended ‘App eller Web’ (App or Web).

The presentation was in Swedish, but I am in the lucky position that I understand common Swedish.

The first presenter, Andreas Sjöström (@AndreasSjostrom), did a presentation where he defined what an app is. Taking very much a web perspective, he did a historical overview showing current marketing campaigns from different countries emphasizing the app presence, much like what was seen some years ago with URLs and the web.

When it came to the decision on web or app, Andreas emphasized the need to be consistent in your choice, and in your implementation in particular; the hybrid solutions seen in many places are to be categorized as utter failures.

Andreas also mentioned another thing, and that was the human relation to a smartphone compared to a laptop. His example was asking somebody to borrow their laptop to access webmail. Most people would answer positively. But requesting the same thing with an iPhone would result in a negative response. He called this the toothbrush relation. Apparently we are more attached to our smartphones than to our laptops.

In a graph presented by Andreas with some statistics from IDG, it was stated that users browsing with smartphones stay on pages for longer than people browsing from computers. I have always found it puzzling how these numbers can be obtained for an asynchronous protocol, but what was more puzzling was the difference, which was significant. I have not been able to obtain the report, but I will try, and add it if I succeed.

Andreas had only one request: making your current website work on mobile is the least effort you should invest.

Next up was Patrik Axelsson (@patrikaxelsson).

Patrik had a more technical approach. Doing some combinatorics over Android OS versions and requirements for resolution support, you would end up with about 48 modes which should be tested. Patrik referred to a scoreboard which would assist in the decision-making on going either native (app) or web. I am sorry I cannot refer to the scoreboard; if I am able to obtain it, it will be listed in this entry.

UPDATE: the presentation is available from Slideshare, see slides 23 and 25.

Furthermore, Patrik recommended doing an analysis first and then picking the paradigm (app or web) based on what your requirements for usability etc. would be. Then you would be able to make an informed decision – if the first question you ask is what method to use for implementation, you are doing it wrong.

Another recommendation from Patrik was very much in line with the presentation by Andreas. Do mobile web first, then find out what scenarios need special and additional treatment and might need a dedicated utility, perhaps implemented as an app – again make informed decisions.

Last presenter was Björn Hedensjö (@bjornhedensjo). Björn sort of followed up on the two previous presentations, and I did not take any notes.

My reflection on the seminar was that some important factors were missing. My thoughts on the topic are very much in line with the points made by Patrik.

The discussion was very much about distribution and usability. I think the debate lacked an aspect which was raised at Internetdagen in Copenhagen, where it was mentioned as one of the topics that would influence the future of the Internet. One could argue that it is a tad vision-less, but in this context it made more sense: convenience.

The smartphone experience is very much one of convenience. So I think that when deciding between app or web you have to define what is convenient. The use of Internet resources can pretty much be hidden from the user, taking AJAX for the web, app solutions and especially cloud trends into consideration.

All in all an educational talk, with some good pragmatic pointers.

The keynote by Bruce Schneier was followed by a keynote by Lynn St. Amour (@lynnstamour) from the Internet Society (ISOC).

I had no prior knowledge of ISOC and its work, and it was quite interesting to get an overview of an organization whose primary areas of focus are also important in my opinion.

Lynn gave an overview of the work done in ISOC. ISOC emphasizes a model based on distribution and collaboration. Some of the key issues have been emerging countries and economics.

The ISOC regards the Internet as an enabler for everyone. IPv6 is a high priority if we want to continue the evolution of the Internet and enable more countries and people.

Lynn focused very much on the multi stakeholder principle and the capabilities exposed when using such a principle. The philosophies dominating the ISOC work are based on collaboration, openness and democracy.

Lynn mentioned the importance of keeping the virtues on which the Internet was built alive. The Internet as we know it is based on open standards, it supports a large variety of business models, meaning diversity and innovation can thrive more freely.

The most challenging years are ahead, and one of the bigger problems facing the Internet and the principles on which it has been built is censorship. She mentioned DNS blocking, where we can observe the problem of technology being used to censor non-technical problems. Lynn mentioned that 44 countries are doing aggressive filtering; a year ago it was 4.

I am not sure about the exact numbers, but the picture is pretty clear and the development is most concerning. Censorship monitoring services should be able to give more exact data on this development.

The continued work in ISOC and related organizations is of utmost importance, since what we have observed over recent years is that the Internet is an enabler for development, economic growth and freedom of speech and expression. Lynn explicitly mentioned Article 19 of The Universal Declaration of Human Rights.

This is my first time attending Internetdagarna in Stockholm, Sweden. I attended our Danish equivalent in October, but had been informed that this was much bigger, with many more topics and tracks. I must admit that I am usually pretty busted after a technical conference, but day 1 of Internetdagarna did most certainly not disappoint, and I am exhausted after a long and very educational day. Here follow my notes and some reflections on the various talks I attended.

I have divided the blog post into the separate talks, since the notes became somewhat lengthy.

I started out with the security guru, Bruce Schneier (@bruce_schneier / @schneierblog), whom I have been following on Twitter. Keynotes are always interesting, since the speakers are often given free hands in their choice of topic, and today Bruce Schneier did not disappoint.

Bruce had on a previous occasion participated in a panel in Washington DC, where the term “Cyberwar” and its relevance had been debated. Bruce had come to the conclusion that the term “Cyberwar” is not really well defined.

Bruce went over several examples of different uses of the term, both to emphasize the ambiguity of the term and to demonstrate its widespread use in various communities, ranging from the military, to security, to the media.

He brought up some interesting characteristics of what is often referred to as “Cyberwar” and compared this to conventional use of the term war. At the same time he described some of the characteristics of the events which have been categorized as “Cyberwar” by various groups, without any consensus or definition of terms in place.

Bruce did by no means undermine the danger of the events and activities related to what is often labelled as “Cyberwar”, but due to the vague definition these events and activities get labeled “Cyberwar”, which makes it even more difficult for us to actually address what the problems and possible remedies would be. Bruce referred to this as “cognitive confusion”.

Technology is spreading capability in a sense that we have not observed before. Compared to traditional weapons of war like tanks and airplanes, which are limited to governments and states, the weapons which would play the primary role in a “cyberwar” are much more widely spread and more easily distributed; they carry no return address, and there are no insignia or flags.

Also, the motivation of the attackers in these types of events is different. Some attacks might have their roots in what could also be the reasoning leading up to skirmishes or a war, whether this is based on belief, economics or culture. But these sorts of attacks are the same as those used by politically motivated activists or criminals. As Bruce describes it, on the Internet attack is easier than defense.

I can comment here that the same observations were presented at AppSecEU in Dublin, which I also attended back in June (see also: blog posts from day 1 and day 2).

When we have problems defining who is attacking and why, the comparison to war no longer makes sense. Attacks might be government tolerated or even government sponsored, but we have no way of telling. The regular way of handling nation state conflicts is via traditional government channels, using diplomacy and treaties.

When the aggressor however does not match the above, it is a task for the judicial system. But the legal frameworks fall short, for the same reasons.

This lack of clarity makes the case for a shift of power in the US and a move towards more military jurisdiction over civilian jurisdiction. One of the basic arguments is based on the fact that the military protects traditional infrastructure like the power grid and the water supply, so why should it not protect something as essential as an Internet backbone?

Bruce mentioned APT (Advanced Persistent Threat) as one of the attack vectors: where our security measures today are what he describes as relative to the attack vectors, APT requires a more absolute security, due to the complexity of these sorts of attacks.

What I take away from the talk is that we need to update our perception and knowledge of these aspects of the Internet. Attacks are not going to go away, and we need to be able to handle these threats using the proper means and categorization.

War is most certainly not the answer.

Internetdagen 2011


26 Oct 2011

Internetdagen 2011 is an event arranged by DIFO. It is the second time DIFO has arranged the event; it was however the first time I participated. I had been asked to be the moderator of a talk on the feasibility of a kill switch in Denmark.

The whole concept of Internetdagen is very intriguing: a conference where you can cover a variety of aspects of the Internet as a medium and phenomenon, without it having a primary focus on technical details, businesses, IT or some other specialized topic. You can shop around between talks on topics ranging from IPv6 and social networks to neurobiology and so on.

The conference took place in the new Tivoli Congress Center, which set a fantastic frame for a conference with a futuristic touch. The venue was huge, and this unfortunately had the effect of emphasizing the lack of participants.

Internetdagen is inspired by the conference Internetdagarna in Sweden and even though it is on a minor scale, the potential and importance of such a conference should not be disregarded.

I have my hopes that the conference will be repeated next year and that the number of participants will increase. The conference is very interesting and will hopefully continue to be so, but it is important that people participate. The Internet is here to stay, and it is of utmost importance that we reflect on and debate the impact of this medium on our society. In order to facilitate a broad debate we need all the people and opinions we can muster. The impact of the Internet today, and its constant redefinition of itself as a medium, emphasizes the importance of a forum where we can meet, debate, educate and be educated.

One of the very interesting talks was on how the use of social media is changing our brains. The use of search engines and remembering by reference might impact how we use our memory: instead of remembering facts we remember how to find the facts.

The use of interruptive social media was also problematized. Research has already demonstrated that the human brain is unable to multi-task without influence on the solutions provided during multi-tasking, and the interruptive nature of social media strengthens this problem. The research in the field is sparse and quite new; it is only beginning to provide data and is inconclusive at this time. Changes to the brain have been observed to be taking place, but whether this is for the better or for the worse is not possible to say.

The research is ongoing, so it is going to be interesting to follow it in the future.

Hope to see you at Internetdagen sometime in the future…

I have collected these resources (in Danish) on Internetdagen:

If you are a Twitter user you can also visit the Lanyrd site for information. If you come by any resources, please feel free to let me know about them.

About this blog

This is the corporate blog of logicLAB, a software development company based in Copenhagen, Denmark.
