Monday evening I attended a local OWASP chapter meeting. The guest speaker was Jim Manico, the person behind the OWASP Podcast and someone extremely knowledgeable on the topic of security, in particular security for web development.
Jim started out by turning the whole problem of security around: instead of talking about the top 10 threats, he presented a set of top 10 defenses for web applications, emphasizing the importance of talking about remedies instead of threats.
The most important remedy is query parameterization (prepared statements with bound parameters). This addresses the number one and one of the most destructive threats, also known as SQL injection. Jim mentioned the OWASP Query Parameterization cheat sheet dedicated to this topic. The remedy is very basic, but the problem of SQL injection does not seem to go away. On a side note, OWASP also has a cheat sheet dedicated to SQL injection prevention.
Some basic examples of attack vectors could be:
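One such vector, together with the parameterized remedy, can be sketched in a few lines of Python using the built-in sqlite3 module; the table and data are my own illustration, not examples from the talk:

```python
import sqlite3

# A minimal sketch of an injection attempt and the parameterized remedy.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query:
#   "SELECT * FROM users WHERE name = 'alice' OR '1'='1'"  -- matches everything

# Safe: the driver binds the value, so it is treated as data, never as SQL
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload matches no user name
```

The bound parameter never reaches the SQL parser as code, which is the whole point of the remedy.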
Again Jim emphasized that this is hard to get right, and even some of the more prominent and talented developers do not see all of these issues.
Jim mentioned a resource recommending Element.setAttribute, which has several issues due to its power.
Next up was CSRF, defenses mentioned were:
Jim spoke in favour of the randomized token model, whereas I lean towards the double-submit cookie model; perhaps a combination of all the defenses would be optimal. Jim himself mentioned theft of entropy-based tokens using an empty HTML form, so I do not see the randomized token model standing on its own: if its implementation has the least weakness, it is useless.
OWASP even has a cheat sheet.
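As a sketch of the randomized token model, assuming a generic session dictionary (the helper names are mine, not from the talk or the cheat sheet):

```python
import hmac
import secrets

# Generate an unpredictable per-session token, embed it in forms, and
# compare in constant time on submission to avoid timing side channels.
def issue_token(session):
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def verify_token(session, submitted):
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted or "")

session = {}
token = issue_token(session)
print(verify_token(session, token))     # True
print(verify_token(session, "forged"))  # False
```

The weakness Jim pointed at is not in the generation but in how the token is exposed, which is exactly why I would not rely on this defense alone.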
Jim’s take on input validation surprised me a bit, but I could follow him. If you have your general defenses in place, input validation is actually more about data sanitization than security. Jim mentioned that handling your output is more important. This leads me to something I have heard mentioned earlier: you cannot trust your back end either, since you cannot be sure how the data was inserted/injected in the first place, so result sets, for example from a database, should be regarded with equal mistrust. The issue is really that the malicious data does not do any harm until it reaches a browser again.
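The point about handling output can be sketched in a couple of lines; this uses Python's html module and is my illustration, not Jim's:

```python
import html

# Escape untrusted data at the point where it reaches the browser,
# regardless of where it came from (user input, database result sets, ...).
untrusted = "<script>alert('xss')</script>"  # could come straight from the database
safe = html.escape(untrusted, quote=True)
print(safe)  # &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
```

Treating the database result with the same mistrust as direct user input is what makes this an output concern rather than an input one.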
Jim mentioned a defense of using type strictness. I understand the concept, since it is often used in regular defensive programming and is not specific to security. It does however require developers using dynamic languages, like me, to think about implementing this sort of type checking as part of the application, since it is not necessarily a part of the language itself. This leads back to input validation and data sanitization, so this sort of defense does require strict handling of input data.
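A minimal sketch of what such type strictness could look like in a dynamic language; the function and limits are hypothetical:

```python
# Coerce and validate input at the boundary, failing loudly on anything
# unexpected, instead of letting arbitrary strings flow into the application.
def parse_user_id(raw):
    if not isinstance(raw, str) or not raw.isdigit():
        raise ValueError("user id must be a string of digits")
    value = int(raw)
    if not 1 <= value <= 10**9:  # illustrative upper bound
        raise ValueError("user id out of range")
    return value

print(parse_user_id("42"))  # 42
# parse_user_id("42; DROP TABLE users") would raise ValueError
```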
Jim gave some overall recommendations:
The latter does require that you parse your HTML vigorously (see also: the OWASP DOM XSS cheat sheet).
On the topic of parsing: the most widespread way of deserializing JSON is by using eval. Use a JSON parser instead.
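To illustrate in Python terms (the same point applies to JavaScript's eval vs. JSON.parse):

```python
import json

# eval executes arbitrary code; a real JSON parser only ever produces data.
payload = '{"name": "alice", "admin": false}'
data = json.loads(payload)
print(data["name"])  # alice

malicious = "__import__('os').system('rm -rf /')"
# eval(malicious) would run the command; json.loads simply rejects it:
try:
    json.loads(malicious)
except json.JSONDecodeError:
    print("rejected")  # prints "rejected"
```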
Next up was the largest of the topics Jim covered and I think one of the topics which interests him the most currently: Access Control Design Theory.
Jim grouped the attacks in this category into the following groups:
Jim stated that the access control component has to be introduced early, deeply and globally in your system. Implement a mapping of features, policies and a central point of access.
Vertical access control attacks are your regular attacks where privilege escalation is attempted. Horizontal access control attacks I guess are related to ‘Insecure Direct Object References’, which is listed in the OWASP top 10. This is where the attacker is able to obtain access to data belonging to other entities, by exploiting a role system or the like.
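A central point of access control covering both the vertical (role) and horizontal (ownership) checks could be sketched like this; all names and rules are illustrative, not Jim's design:

```python
from functools import wraps

# One decorator as the central point of access: every protected operation
# goes through the same role check and the same ownership check.
def require(role):
    def decorator(func):
        @wraps(func)
        def wrapper(user, resource, *args, **kwargs):
            if role not in user["roles"]:        # vertical: privilege level
                raise PermissionError("insufficient role")
            if resource["owner"] != user["id"]:  # horizontal: ownership
                raise PermissionError("not the owner")
            return func(user, resource, *args, **kwargs)
        return wrapper
    return decorator

@require("editor")
def update_document(user, resource):
    return "updated"

alice = {"id": 1, "roles": ["editor"]}
doc = {"owner": 1}
print(update_document(alice, doc))  # updated
```

Introducing this early and globally, as Jim suggested, is what keeps the checks from being sprinkled inconsistently across the codebase.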
Jim listed some anti-patterns, many of which I see on a regular basis:
A good example of handling of operations on valuable data, was Amazon.com. If you want to:
You have to re-authenticate.
One interesting problem related to horizontal access control, which Jim shed some light on, was the multi-tenant setup. This is the concept with hosted solutions or similar, where you have multiple groups of users using the same system but working on separate sets of data.
I see the same challenge in regard to regular users vs. administrative users; this is an aspect which coincides very well with my ideas around use of public and private PaaS to separate deployments dedicated to regular use vs. administrative use.
Jim finished the presentation with a discussion of the problem area of passwords, and the time had come to talk SSL. Jim stated that SSL only gives us integrity, authenticity and confidentiality. SSL is good, but SSL alone does not keep you safe. According to the W3C the GET method leaks, so use it with consideration.
In general you have to formulate a set of password policies and Jim gave some recommendations:
User lock-out can be useful, but it can be straining on your resources, so you need to provide your users with ways to unlock just as you provide functionality for handling the case of forgotten passwords.
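On the storage side of passwords, here is a minimal sketch of what I assume to be the baseline recommendation: hash with a per-user random salt and a slow key derivation function. The function names and iteration count are illustrative, not Jim's exact advice:

```python
import hashlib
import hmac
import os

# Never store plaintext passwords: derive a digest with a per-user salt
# and a deliberately slow KDF, and compare digests in constant time.
def hash_password(password, salt=None, iterations=100_000):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def check_password(password, salt, expected, iterations=100_000):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)

salt, digest = hash_password("correct horse")
print(check_password("correct horse", salt, digest))  # True
```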
OWASP provides a forgotten password cheat sheet with more details on the topic of retrieving forgotten passwords.
I am in a position where I am designing an access control sub-system, so the timing could not be better. The evening was a blast and I am drowning in notes.
Stackato is a cloud solution from renowned ActiveState. It is based on the Open Source CloudFoundry and offers a serious cloud solution for Perl programmers, but also supports Python, Ruby, Node.js, PHP, Clojure and Java.
Stackato is very strong in the private PaaS area, but also supports use as a public PaaS and deployment onto Amazon’s EC2.
The presentation will cover basic use of Stackato and the reasons for using a PaaS, public as well as private. Stackato can also be used as a micro-cloud for developers, supporting vSphere, VMware Fusion, Parallels and VirtualBox.
Stackato is currently in public beta, but it is already quite impressive in both features and tools. Stackato is not Open Source, but CloudFoundry is and Stackato offers a magnificent platform for deployment of Open Source projects, sites and services.
ActiveState has committed to keeping the micro-cloud solution free, so it offers an exciting capability and extension to the developer’s toolbox and toolchain.
More information will follow, and the presentation will be made available online.
Following the keynote on day 2 of Internetdagarna was Dr. Matt Wood from Amazon. Matt Wood is a platform evangelist, working on the Amazon Web Services (AWS).
It did not take long after Matt Wood had started until Twitter went crazy. People did not consider Matt’s talk a keynote, but merely a sales pitch. My take on this was somewhat divided: yes, the talk was a sales pitch and as a keynote it failed, but at the same time the topic held a lot of professional interest to me.
I have decided to go over my notes here anyway, even though I think Amazon did not understand the assignment of delivering a visionary keynote on cloud computing at an Internet conference; instead they did a 2.99 sales pitch, without capturing the majority of the audience.
Well disappointment aside and once more onto the pitch.
Matt stated that Amazon is a tech company that happens to run a book store. All of their experience and expertise in running an international web-based bookstore has been invested into their web service solutions.
AWS started by offering programmatic developer access (an API) to their commerce platform for accessing metadata.
In addition, Amazon now offers a scalable infrastructure cloud solution named EC2 and a storage solution named S3.
Matt focused on the EC2 part and the functional offering instead of the data and storage based offerings.
Matt presented an intriguing view on what problem it is that cloud computing solves. In traditional IT projects and software development, it is the handling of infrastructure that inflicts the friction. The postulate by Amazon is that infrastructure handling (they refer to this as heavy lifting) is 70% of the effort, while the remaining 30% is the actual development, where the actual business value is added. The pitch from Amazon is that they want to maximize the latter.
Matt also stated that the cloud drives innovation, making the transition from idea to product easier and providing start-ups with essential leverage, so investment can be kept to an absolute minimum.
EC2 has a very low barrier for entry:
- access is on-demand
- low-cost, where you pay as you go
- utility computing and utility infrastructure
- flexibility, lots of flexibility
An example was Animoto.
Lots of issues remain. Matt Wood mentioned the shared responsibility model, which is used by Amazon to establish a mutual responsibility for security aspects. Amazon has published two whitepapers on the topic. In regard to regulation, Matt emphasized that in the AWS cloud data is local; data is not mirrored to the US from Europe, for example.
I will hopefully write about cloud computing in the future since I am evaluating and experimenting with a micro cloud solution supporting Perl.
I had been following the tweets from day one of Internetdagarna tagged #ind11 and Yochai Benkler had given a talk entitled ‘Wikileaks and the future of the press’, which had been very well received, so it was with some expectations I sat down to listen to the keynote.
The keynote examined the concepts of innovation and open source as the primary motors of a new economy vs. the traditional industrial economy based on traditional industries.
Yochai emphasized some of the key aspects of these drivers; one of the terms he used was “imperfect rapidly learning system”. Much of what he mentioned seemed to have its roots in the agile movement, or the other way around, so it was not completely new to me, and if you are into software development methodology you could easily follow the patterns mentioned by Yochai.
One of the classic examples of open source success mentioned by Yochai was the HTTP server. This market is still dominated by Apache, but other open source candidates like nginx are also showing on the graph (this is not the graph used by Yochai, but I think the data are from the same source and the tendency demonstrated is the same).
Another interesting aspect was based on IBM. IBM, the largest patent holder in the US, now makes more money on Linux-based activities than on its traditional business of activities based on proprietary hardware and software.
Conclusion: open source has become a serious factor.
Another interesting factor Yochai mentioned was the network aspect. The example was how Wikipedia has outcompeted Encarta from Microsoft, which historically outcompeted something like Encyclopedia Britannica distributed using dead wood. The network outcompeted the CDROM based distribution, which again outcompeted the book. Looking at the distribution chain and logistic differences in distributing the three, it is easy to spot why the first is the victor.
What the above example demonstrates is that innovation is the key in competition, but as Yochai states: Innovation as an industry is fundamentally different from traditional commodity based industries.
Yochai mentioned that historically, innovation had been on the side of the traditional industries. From there Yochai started talking about people and knowledge, making the point that innovation comes from people. I understand how he made the connection, but I do not understand how you can dismiss traditional innovation; after all, innovation has always been around, but I might have missed one of Yochai’s points.
Yochai then stated that knowledge is tacit and sticky and is transferred with people. Creativity cannot be controlled, which makes motivation of people an important parameter. Other aspects can however also be observed, such as a behavioral value shift where earlier peripheral activities are becoming core value, and social aspects becoming a great motivator. Humans are pro-social beings, hence humanization becomes an important factor.
Yochai started to talk about people vs. companies. His examples were of course taken from the USA. A funny thing he mentioned was the comparison of legislation between California and some other states. He referred to this as the historical accident in California: apparently the legislation in the state of California makes it easier for employees to change employer. What can be observed elsewhere is competition-regulating laws and what I expect to be competition clauses and the like.
He presented a resource, WIPO, which has an article entitled ‘Trade Secrets and Employee Loyalty’ stating that employees are the biggest threat – hilarious, yet scary taking into consideration Yochai’s claims that we are facing a paradigm shift in economic models, where innovation becomes the prime factor.
Yochai mentioned lots of interesting resources throughout his presentation. By the end of the talk he came to the topic of start-ups, not with the focus on the idea of starting up, but focusing more on what it is that these start-ups do differently and why they succeed.
I noted the following:
- Sunlight Foundation working with open data and government transparency
- www.ushahidi.com mash-ups: violence maps in Kenya, wildfires in Russia and damage control in Haiti
- Skype/KaZaa, using open standards to innovate
His observation is that these new companies come from the edge and do something which it is said cannot be done, and which is not necessarily allowed due to the traditional way of protecting trade secrets and business models. One of his examples here was Apple’s App Store, where Google Voice and Skype were allowed after the FCC leaned on Apple.
Yochai’s conclusion was that freedom is required to do innovation in decentralized and open systems.
I am not sure, but I do hope I captured the essence of Yochai’s keynote.
In: Events, 22 Nov 2011
I attended the seminar with two sessions on certificates and SSL. These two presentations were however repeated on day 3 as part of the OWASP track, so I have decided to postpone the blog posts on these topics – revisiting the two talks most certainly did not hurt.
The last presentation I attended on day 1 was entitled ‘United Nations and Internet Governance’ and it was in English. This is not one of those sessions I would normally attend, but experience tells me that attending sessions you would normally not consider often leads to surprising insights and interesting angles.
The IGF is a special organization working under the United Nations (UN), see the about page on the IGF website.
The IGF works on what can be considered global problems with the Internet. The IGF is an open platform for a plethora of stakeholders to debate and discuss the Internet. The IGF does as such not hold any sort of mandate, but sees to it that concerns and issues are raised in the right fora and organizations. The IGF differs from classical intergovernmental organizations and fora since it is based on a multi-stakeholder collaboration model.
An example given was the IDN issue, raised by many countries with alphabets outside the 7-bit ASCII range. The issue was raised in the IGF and then solved via the proper stakeholders.
The IGF was formed as an outcome of the “World Summit on the Information Society” (WSIS) work in 2005, with a timeframe of 5 years. In 2009 the UN extended this for 5 more years. Whether this will be extended further is hard to predict, since some stakeholders are interested in more governmental influence and a closer binding to UN process and structure.
Markus mentioned the ‘Tunis Agenda’ as one of the important documents describing the IGF work and premises. The IGF is described as a very extraordinary UN forum, in the sense that it works with stakeholders outside the normal governmental sphere dominating the UN. Markus emphasized the importance of these stakeholders and the paradigm under which the IGF conducts its work, since non-governmental stakeholders provide a reality check, which is most needed when dealing with something as complex as the Internet.
Following Markus was Juuso Moisander. Juuso represents the Finnish government in the IGF and EuroDIG, the European branch of the IGF. EuroDIG is having its next meeting in Stockholm, Sweden in January 2012.
The last speaker in this seminar was Nurani Nimpuno (@nnimpuno), one of the many stakeholders playing an important role in the IGF. Nurani supplemented Markus and mentioned the IBSA proposal (PDF), which is another important document produced in the context of the IGF. The mentioned documents are interesting if you want to get more in depth with the work going on in the IGF.
I do not feel my post is giving this particular seminar the depth and detail it deserves. The topic was quite interesting, and Markus, Juuso, Nurani and the moderator Staffan Jonson provided excellent insights and descriptions of the workings of the IGF, but this was very new territory to me, so I might not have caught sufficient detail and angles to capture all the facets of the IGF’s work. I hope that this post can help spark an interest in the work carried out by the IGF.
One funny thing I did pick up, judging from this post and the seminar, is that the use of acronyms is most certainly not restricted to technical documentation and systems.
After two pretty heavy keynotes by Bruce Schneier and Lynn St Amour, I decided to attend something less political and more technical; I am a technician after all. So I attended ‘App eller Web’ (App or Web).
The presentation was in Swedish, but I am in the lucky position that I understand common Swedish.
The first presenter, Andreas Sjöström (@AndreasSjostrom), did a presentation where he defined what an app is. Taking very much a web perspective, he did a historical overview showing current marketing campaigns from different countries emphasizing app presence, much like what had been seen some years ago with URLs and the web.
When it came to the decision on web or app, Andreas emphasized the need to be consistent in your choice, and in your implementation in particular; the hybrid solutions seen in many places are to be categorized as utter failures.
Andreas also mentioned another thing, and that was the human relation to a smartphone compared to a laptop. His example was asking somebody to borrow their laptop to access webmail. Most people would answer positively, but requesting the same thing with an iPhone would result in a negative response. He called this the toothbrush relation: apparently we are more attached to our smartphones than to our laptops.
In a graph presented by Andreas, with some statistics from IDG, it was stated that users browsing using smartphones would stay on pages for longer than people browsing from computers. I have always found it puzzling how these numbers can be obtained for an asynchronous protocol, but what was more puzzling was the difference, which was significant. I have not been able to obtain the report, but I will try and add it if I succeed.
Andreas had only one request and that was: Making your current website work on mobile was the least effort you should invest.
Next up was Patrik Axelsson (@patrikaxelsson).
Patrik had a more technical approach. Doing some combinatorics over Android OS versions and requirements for resolution support you would end up with about 48 modes, which should be tested. Patrik referred to a scoreboard, which would assist in the decision making on going either native (app) or web. I am sorry I cannot refer to the scoreboard, if I am able to obtain it, it will be listed on the entry.
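The combinatorics can be sketched quickly; the version and resolution lists below are my own illustration chosen to reach 48, not Patrik's actual factors:

```python
from itertools import product

# Back-of-the-envelope: e.g. 8 Android OS versions x 6 screen
# resolutions = 48 modes that would each need testing.
os_versions = ["1.5", "1.6", "2.1", "2.2", "2.3", "3.0", "3.1", "3.2"]
resolutions = ["240x320", "320x480", "480x800", "480x854", "600x1024", "800x1280"]

modes = list(product(os_versions, resolutions))
print(len(modes))  # 48
```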
UPDATE: presentation from Slideshare see slides 23 and 25.
Furthermore, Patrik recommended doing an analysis first and then picking the paradigm (app or web) based on what your requirements for usability etc. would be. Then you would be able to make an informed decision – if the first question you ask is what method to use for implementation, you are doing it wrong.
Another recommendation from Patrik was very much in line with the presentation by Andreas. Do mobile web first, then find out what scenarios need special and additional treatment and might need a dedicated utility, perhaps implemented as an app – again make informed decisions.
The last presenter was Björn Hedensjö (@bjornhedensjo). Björn sort of followed up on the two previous presentations, and I did not take any notes.
My reflection on the seminar was that some important factors were missing. My thoughts on the topic are very much in line with the points made by Patrik.
The discussion was very much about distribution and usability. I think the debate lacked an aspect which was raised at Internetdagen in Copenhagen, where it was mentioned as one of the topics that would influence the future of the Internet. One could argue that it is a tad vision-less, but in this context it made more sense: convenience.
The smartphone experience is very much one of convenience, so I think that when considering app or web, you have to define what is convenient. The use of Internet resources can pretty much be hidden from the user, taking AJAX for the web, app solutions and especially cloud trends into consideration.
All in all an educational talk, with some good pragmatic pointers.
I had no prior knowledge of ISOC and its work, and it was quite interesting to get an overview of an organization whose primary areas of focus are also important in my opinion.
Lynn gave an overview of the work done in ISOC. ISOC emphasizes a model based on distribution and collaboration. Some of the key issues have been emerging countries and economics.
ISOC regards the Internet as an enabler for everyone. IPv6 is a high priority if we want to continue the evolution of the Internet and enable more countries and people.
Lynn focused very much on the multi-stakeholder principle and the capabilities exposed when using such a principle. The philosophies dominating the ISOC work are based on collaboration, openness and democracy.
Lynn mentioned the importance of keeping the virtues on which the Internet was built alive. The Internet as we know it is based on open standards, it supports a large variety of business models, meaning diversity and innovation can thrive more freely.
The most challenging years are ahead, and one of the bigger problems facing the Internet and the principles on which it has been built is censorship. She mentioned DNS blocking, where we can observe the problem of technology being used to censor as a response to non-technical problems. Lynn mentioned that 44 countries are doing aggressive filtering; a year ago it was 4.
I am not sure about the exact numbers, but the picture is pretty clear and the development is most concerning. Censorship monitoring services should be able to give more exact data on this development.
The continued work in ISOC and related organizations is of utmost importance, since what we have observed over the recent years is that the Internet is an enabler for development, economic growth and freedom of speech and expression. Lynn explicitly mentioned Article 19 of The Universal Declaration of Human Rights.
This is my first time attending Internetdagarna in Stockholm, Sweden. I attended our Danish equivalent in October, but had been informed that this was much bigger, with many more topics and tracks. I must admit that I am usually pretty busted after a technical conference, but day 1 of Internetdagarna most certainly did not disappoint, and I am exhausted after a long and very educational day. Here follow my notes and some reflections on the various talks I attended.
I have divided the blog post into the separate talks, since the notes became somewhat lengthy.
I started out with the security guru Bruce Schneier (@bruce_schneier / @schneierblog), whom I have been following on Twitter. Keynotes are always interesting, since the speakers are often given free rein in their choice of topic, and today Bruce Schneier did not disappoint.
Bruce had on a previous occasion participated in a panel in Washington DC. where the term “Cyberwar” and its relevance had been debated. Bruce had come to the conclusion that the term “Cyberwar” is not really well defined.
Bruce went over several examples of different uses of the term, both to emphasize its ambiguity and to demonstrate its widespread use in various communities, ranging from the military, to security, to the media.
He brought up some interesting characteristics of what is often referred to as “Cyberwar” and compared this to the conventional use of the term war. At the same time he described some of the characteristics of the events which have been categorized as “Cyberwar” by various groups, without consensus or definition of terms in place.
Bruce did by no means downplay the danger of the events and activities related to what is often labelled “Cyberwar”, but due to the vague definition these events and activities get labelled “Cyberwar”, which makes it even more difficult for us to actually address what the problems and possible remedies would be. Bruce referred to this as “cognitive confusion”.
Technology is spreading capability in a way we have not observed before, compared to traditional weapons of war like tanks and airplanes, which are limited to governments and states. The weapons which would play the primary role in a “cyberwar” are much more widely spread and more easily distributed, and they carry no return address; there are no insignia or flags.
The motivation of the attackers in these types of events is also different. Some attacks might have their roots in what could also be the reasoning leading up to skirmishes or a war, whether based on belief, economics or culture. But these sorts of attacks are the same as those used by politically motivated activists or criminals. As Bruce describes it: on the Internet, attack is easier than defense.
I can comment here that the same observations were presented at AppSecEU in Dublin, which I attended back in June (see also: my blog posts from day 1 and day 2).
When we have problems defining who is attacking and why, the comparison to war no longer makes sense. Attacks might be government tolerated or even government sponsored, but we have no way of telling. The regular way of handling nation state conflicts is via traditional government channels, using diplomacy and treaties.
When the aggressor does not match the above, however, it is a task for the judicial system. But the legal frameworks fall short, for the same reasons.
This lack of clarity makes the case for a shift of power in the US and a move towards military jurisdiction over civilian jurisdiction. One of the basic arguments is based on the fact that the military protects traditional infrastructure like the power grid and water supply, so why should it not protect something as essential as an Internet backbone?
Bruce mentioned APT (Advanced Persistent Threat) as one of the attack vectors. Where our security measures today are what he describes as relative to the attack vectors, APT requires a more absolute security, due to the complexity of these sorts of attacks.
What I take away from the talk is that we need to update our perception of and knowledge on these aspects of the Internet. Attacks are not going to go away, and we need to be able to handle these threats using the proper means and categorization.
War is most certainly not the answer.
After a super day 1 at AppSecEU 2011, it was not hard to get out of bed and go to the conference, well a bit hard after a social event at a local pub where I mingled with other attendees including Janne Uusilehto who was to give the first keynote on day 2 and I promised him to be there…
Apart from the keynote by Janne Uusilehto, I had planned to attend the following sessions:
- An introduction to the OWASP Zed Attack Proxy by Simon Bennetts, OWASP
- New standards and upcoming technologies in browser security by Tobias Gondrom, IETF WG
- Six Key Application Security Program Metrics by Adrian Evans, Whitehat Security
- Keynote: Alex Lucas and Liam Cronin, Microsoft
- Putting the smart into Smartphones by Dan Cornell (@danielcornell), Denim Group
- Practical Crypto Attacks Against Web Applications by Justin Clarke (@connectjunkie), Gotham Digital Science
- Keynote: Ivan Ristic, Qualys
Unfortunately Ivan Ristic had lost his voice, so the talk: Six Key Application Security Program Metrics by Adrian Evans was moved to be the last keynote of the day.
I attended the keynote by Janne Uusilehto, but I did not take any notes: I was present and online, but in recovery mode. Janne’s talk was pretty much along the lines of Brad Arkin’s keynote on day 1. It was pretty high-level stuff, and the organization I am working for, and I on a personal and professional level, are not really there yet. I am however putting a lot of effort into deciding how our SDLC (Software Development Lifecycle) is going to be shaped. Many good points and pointers from Janne Uusilehto, and just his listing of resources and organizations paints a picture of an industry as a whole coming together to define, implement and promote application security.
Next was an OWASP project presentation. The previous OWASP project presentations had provided much useful information, so I did not hesitate to attend ‘An introduction to the OWASP Zed Attack Proxy’ by Simon Bennetts.
Simon Bennetts, who has a background in software development, made a very interesting point aimed at developers:
- You cannot build secure web applications unless you know how to attack them
Unfortunately, pentesting is a black art according to many developers. Simon Bennetts is however the lead on a project which is a tool aiming to help developers do pentesting.
The project is an (attack) proxy based on the Paros Proxy. It integrates with a lot of other OWASP tools like DirBuster, and other projects are integrating with the Zed Attack Proxy.
One of the interesting aspects will be the ability to run in headless mode, so basic pentesting can be integrated with existing tool chains as part of automated test runs.
Next up was: New standards and upcoming technologies in browser security by Tobias Gondrom from the IETF WG.
The IETF is working hard to define new capabilities in the HTTP protocol to enhance the overall security, including:
- A new standard to unify the way browsers detect content types, using sniffing based on standardized algorithms
- DNSSEC for TLS
- An X-Frame-Options header to control use of iframes
- Content Security Policy, a feature with a policy-uri (the location of a policy file) and a report-uri, so that if policies are not adhered to, the site gets a notification of the violation
- ‘do-not-track’, which will further be enforced by local legislation; this feature is only on the horizon, however. This is quite interesting in relation to the new cookie rules in the EU, and it is going to be interesting to see how it is welcomed, since the technology does not define the rules, just the capability.
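Several of these mechanisms boil down to response headers a site can emit today. A minimal sketch using Python's standard WSGI conventions; the header values are examples only, not recommendations from the talk:

```python
# A tiny WSGI application emitting some of the discussed security headers.
def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("X-Content-Type-Options", "nosniff"),  # opt out of content sniffing
        ("X-Frame-Options", "DENY"),            # control use in iframes
        ("Content-Security-Policy",
         "default-src 'self'; report-uri /csp-report"),  # policy + violation reports
    ]
    start_response("200 OK", headers)
    return [b"hello"]
```

Any WSGI server (e.g. wsgiref.simple_server) can serve this app directly.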
Tobias Gondrom mentioned the following resources: IETF WebSec and W3C App Sec; unfortunately I did not get the URLs noted down. Tobias Gondrom made the same mistake most of the presenters did at AppSecEU: ending with a slide with a question mark, instead of stopping at the resources and references slide.
The next session was cancelled, but we got a somewhat late notice, so I just stayed in the room and coded some Perl.
After lunch the next keynote followed with Alex Lucas and Liam Cronin from Microsoft
Alex Lucas, started out with giving a brief historical overview of exploits from a Microsoft perspective. He then went on to describe the initiatives made at Microsoft. Again like Adobe and Nokia, these large corporations are really taking application security and security in general very seriously and the result of their work really seems to be paying off. Other corporations can really learn from these examples.
Alex Lucas listed some interesting points, which were very much in line with the Adobe initiatives presented by Brad Arkin. I can highlight the following:
- “Plan the work, work the plan”
- Updating is important, and at Microsoft it has become much more proactive
- Automation can be used to eliminate whole classes of exploits
- Automation can replace human effort
- Let the tools do the job (compilers, tests, fuzzing, static analysis etc.)
- Security reviews as part of SDLC
As Alex Lucas put it: “the defender must close all holes, if one exploit is used, the attacker has won”.
Following Alex Lucas was Liam Cronin, who gave a presentation on openness at Microsoft. Microsoft is becoming more open on many points, which is a good thing. Liam mentioned the following points:
- Substitutability, reduces lock-in
- Open Data: http://www.odata.org/
- Open Government Data Initiative
- Data formats must be non-proprietary
- ODF and OpenXML
- Windows Azure (Microsoft cloud), open to PHP, Java and Ruby in addition to Microsoft’s own .Net technologies.
In general a good presentation, and again it can be mentioned that Microsoft is participating in the different fora related to security, such as http://www.safecode.org.
See also: http://www.microsoft.com/openness
The next talk I attended was: Putting the smart into Smartphones by Dan Cornell (@danielcornell).
Dan Cornell gave a very interesting presentation on smartphone application development and pentesting.
One of the key points Dan Cornell made was: “Smartphones are always in a hostile environment”, or to rephrase: with mobile applications the code is on the device, and devices get lost, stolen and change owner all the time, so attackers can reverse engineer applications and/or access data not deleted from the device.
The development recommendations from the talk were:
- Don’t store data
- Communicate securely
See also: http://smartphonesdumbapps.com/
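The "communicate securely" recommendation largely boils down to never weakening TLS verification in the client. A minimal Python sketch using the standard library's ssl module (not a mobile SDK, just to make the point concrete):

```python
import ssl

# A client-side TLS context with certificate and hostname verification
# enabled -- these are the defaults of create_default_context(). An
# app's backend calls should never downgrade them, even "temporarily",
# since the device may be on a hostile network at any time.
ctx = ssl.create_default_context()

assert ctx.check_hostname is True            # hostname must match the cert
assert ctx.verify_mode == ssl.CERT_REQUIRED  # cert chain must validate
```

The common anti-pattern is disabling exactly these two checks to silence certificate errors during development, and then shipping that configuration.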
The following talk was practical yet very abstract, but Justin Clarke (@connectjunkie) gave a really insightful presentation entitled Practical Crypto Attacks Against Web Applications.
My notes from this talk are somewhat cryptic themselves, so it is going to take some time to process them, but key points I can bring here are:
- If you are doing cryptography do not be afraid to consult an expert
This is actually good advice. Cryptography is hard, and we often trust cryptographic practices and implementations without really understanding them. Justin Clarke demonstrated some practical examples of how basic use of cryptography really did not provide much security. In addition, Justin Clarke mentioned the following resources:
- Github: padBuster
- Stack Exchange is a good resource for cryptographic advice
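padBuster automates padding oracle attacks: if a server's "bad padding" error is distinguishable from its other failures, an attacker can decrypt CBC ciphertexts byte by byte. A minimal Python sketch of the PKCS#7 padding check that becomes such an oracle (illustrative, not taken from the talk):

```python
def pkcs7_pad(data: bytes, block: int = 16) -> bytes:
    """Append PKCS#7 padding: n bytes, each with value n."""
    n = block - len(data) % block
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    """Strip and verify PKCS#7 padding."""
    n = data[-1]
    if n < 1 or n > len(data) or data[-n:] != bytes([n]) * n:
        # This distinguishable error is exactly what a padding
        # oracle attack probes for, one ciphertext tweak at a time.
        raise ValueError("bad padding")
    return data[:-n]
```

The lesson from the talk applies: a correct-looking primitive can still leak, which is why consulting an expert and returning uniform errors matters more than the padding code itself.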
Last but not least came the final keynote of the conference. As I mentioned earlier, Ivan Ristic cancelled, and instead Adrian Evans of WhiteHat Security gave the closing keynote:
Adrian Evans spoke really fast, and his talk was very much aimed at security professionals. He did, however, mention a lot of interesting resources, and he addressed the lack of general metrics on how we are doing as an industry.
One thing that struck me, and which I had heard several times over the course of the conference, was this trust in frameworks. At the GOTO Copenhagen developer conference, frameworks were really getting a serious beating for not being the best way of doing development. The gap between the security and development communities is still somewhat wide, and I think that fora like OWASP really can help close this gap and get people to discuss the future of application security.
I am not really doing Adrian Evans much justice, since what he actually said was that frameworks are good for authentication but not for authorization, but the generally positive view on frameworks at the conference has been quite clear.
Another point made by Adrian Evans was: “We cannot say ANYTHING about likelihood”. OWASP has big challenges ahead, which brings me to say that I really got a lot of good information from this conference. I can highly recommend attending the conference in the future, or at least joining your local chapter of OWASP.
Thanks to all attendees, organizers and sponsors – I had a great first AppSecEU conference.
This is the corporate blog of logicLAB, a software development company based in Copenhagen, Denmark.