Stackato is a cloud solution from the renowned ActiveState. It is based on the Open Source CloudFoundry and offers a serious cloud solution for Perl programmers, but it also supports Python, Ruby, Node.js, PHP, Clojure and Java.
Stackato is very strong in the private PaaS area, but it also supports use as a public PaaS and deployment onto Amazon’s EC2.
The presentation will cover basic use of Stackato and the reasons for using a PaaS, public as well as private. Stackato can also be used as a micro-cloud for developers, supporting vSphere, VMware Fusion, Parallels and VirtualBox.
Stackato is currently in public beta, but it is already quite impressive in both features and tools. Stackato is not Open Source, but CloudFoundry is, and Stackato offers a magnificent platform for deployment of Open Source projects, sites and services.
ActiveState has committed to keeping the micro-cloud solution free, so it offers an exciting capability and extension to the developer’s toolbox and toolchain.
More information will follow, and the presentation will be posted online when it becomes available.
Following the keynote on day 2 of Internetdagarna was Dr. Matt Wood from Amazon. Matt Wood is a platform evangelist, working on the Amazon Web Services (AWS).
It did not take long after Matt Wood had started before Twitter went crazy. People did not consider Matt’s talk a keynote, but merely a sales pitch. My take on this was somewhat divided: yes, the talk was a sales pitch and as a keynote it failed, but at the same time the topic had a lot of professional interest to me.
I have decided to go over my notes here anyway, even though I think Amazon did not understand the assignment of delivering a visionary keynote on cloud computing at an Internet conference; instead they did a 2.99 sales pitch, without capturing the majority of the audience.
Well, disappointment aside, once more onto the pitch.
Matt stated that Amazon is a tech company that happens to run a book store. All of their experience and expertise in running an international web-based bookstore has been invested into their web service solutions.
AWS started by offering programmatic developer access (an API) to their commerce platform for accessing metadata.
In addition Amazon now offers a scalable infrastructure cloud solution named EC2 and a storage solution S3.
Matt focused on the EC2 part and the functional offering instead of the data and storage based offerings.
Matt presented an intriguing view on what problem it is that cloud computing solves. In traditional IT projects and software development it is the handling of infrastructure that inflicts the friction. The postulate by Amazon is that this infrastructure handling, which they refer to as heavy lifting, is 70% of the effort, and 30% is the actual development, where the actual business value is added. The pitch from Amazon is that they want to maximize the latter.
Matt also stated that the cloud drives innovation, making the transition from idea to product easier and providing start-ups with essential leverage, so investment can be kept to an absolute minimum.
EC2 has a very low barrier for entry:
- it is access on demand
- low-cost, where you pay as you go
- utility computing and utility infrastructure
- flexibility, lots of flexibility
An example was Animoto.
Lots of issues remain. Matt Wood mentioned the shared responsibility model, which is used by Amazon to establish a mutual responsibility for security aspects. Amazon has published two whitepapers on the topic. In regard to regulation, Matt emphasized that in the AWS cloud data is local; data is not mirrored to the US from Europe, for example.
I will hopefully write about cloud computing in the future since I am evaluating and experimenting with a micro cloud solution supporting Perl.
I had been following the tweets from day one of Internetdagarna tagged #ind11 and Yochai Benkler had given a talk entitled ‘Wikileaks and the future of the press’, which had been very well received, so it was with some expectations I sat down to listen to the keynote.
The keynote examined the concepts of innovation and open source as the primary motors of a new economy, versus the traditional economy based on industrial production.
Yochai emphasized some of the key aspects of these drivers; one of the terms he used was “imperfect rapidly learning system”. Much of what he mentioned seemed to have its roots in the agile movement, or the other way around, so it was not completely new to me, and if you are into software development methodology you could easily follow the patterns mentioned by Yochai.
One of the classic examples of open source success mentioned by Yochai was the HTTP server. This market is still dominated by Apache, but other open source candidates like nginx are also showing on the graph (this is not the graph used by Yochai, but I think the data are from the same source and the tendency demonstrated is the same).
Another interesting aspect was based on IBM. IBM, the largest patent holder in the US, now makes more money on Linux-based activities than on its traditional business of activities based on proprietary hardware and software.
Conclusion: open source has become a serious factor.
Another interesting factor Yochai mentioned was the network aspect. The example was how Wikipedia has outcompeted Encarta from Microsoft, which historically outcompeted something like Encyclopedia Britannica distributed using dead wood. The network outcompeted the CDROM based distribution, which again outcompeted the book. Looking at the distribution chain and logistic differences in distributing the three, it is easy to spot why the first is the victor.
What the above example demonstrates is that innovation is the key in competition, but as Yochai states: Innovation as an industry is fundamentally different from traditional commodity based industries.
Yochai mentioned that historically innovation had been on the side of the traditional industries. From there Yochai started talking about people and knowledge, making the point that innovation comes from people. I understand how he made the connection, but I do not understand how you can dismiss traditional innovation; after all, innovation has always been around, but I might have missed one of Yochai’s points.
Yochai then stated that knowledge is tacit and sticky and is transferred with people. Creativity cannot be controlled, which makes the motivation of people an important parameter. Other aspects of this can also be observed, such as a behavioral value shift where earlier peripheral activities are becoming core value, and social aspects becoming a great motivator. Humans are pro-social beings, hence humanization becomes an important factor.
Yochai started to talk about people vs. companies. His examples were of course taken from the USA. A funny thing he mentioned was the comparison of legislation between California and some other states. He referred to this as the historical accident in California. Apparently the legislation in the state of California makes it easier for employees to change employer. What can be observed elsewhere is competition-regulating laws and what I expect to be competition clauses and the like.
He presented a resource, WIPO, which has an article entitled ‘Trade Secrets and Employee Loyalty’ stating that employees are the biggest threat – hilarious, yet scary taking into consideration Yochai’s claims that we are facing a paradigm shift in economic models, where innovation becomes the prime factor.
Yochai mentioned lots of interesting resources throughout his presentation. By the end of the talk he came to the topic of start-ups, not with focus on the idea of starting up, but focusing more on what it is these start-ups do differently and why they succeed.
I noted the following:
- Sunlight Foundation working with open data and government transparency
- www.ushahidi.com mash-ups: violence maps in Kenya, wildfires in Russia and damage control in Haiti
- Skype/KaZaa, using open standards to innovate
His observation is that these new companies come from the edge and do something which it is said cannot be done, and which is not necessarily allowed due to the traditional ways of protecting trade secrets and business models. One of his examples here was Apple’s App Store, where Google Voice and Skype were allowed after the FCC leaned on Apple.
Yochai’s conclusion was that freedom is required to do innovation in decentralized and open systems.
I am not sure, but I do hope I captured the essence of Yochai’s keynote.
In: Events, 22 Nov 2011
I attended the seminar with two sessions on certificates and SSL. These two presentations were however repeated on day 3 as part of the OWASP track, so I have decided to postpone the blog posts on these topics – revisiting the two talks most certainly did not hurt.
The last presentation I attended on day 1 was entitled ‘United Nations and Internet Governance’ and it was in English. This is not one of those sessions I would normally attend, but experience tells me that attending sessions you would normally not consider often leads to surprising insights and interesting angles.
The IGF is a special organization working under the United Nations (UN), see the about page on the IGF website.
The IGF works on what can be considered global problems with the Internet. IGF is an open platform for a plethora of stakeholders to debate and discuss the Internet. The IGF does as such not hold any sort of mandate, but sees to it that concerns and issues are raised in the right fora and organizations. IGF differs from classical intergovernmental organizations and fora since it is based on a multi-stakeholder collaboration model.
An example given was the IDN issue, raised by many countries with alphabets ranging outside 7-bit ASCII. The issue was raised in IGF and then solved via the proper stakeholders.
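The encoding behind IDN can be illustrated with Python’s standard library, which implements the ASCII-compatible Punycode form that solved the non-ASCII domain problem (the domain below is a made-up example):

```python
# Encode an internationalized domain name (IDN) to the
# ASCII-compatible "Punycode" form that goes on the wire,
# and decode it back, using only the standard library.
domain = "bücher.example"  # hypothetical domain with a non-ASCII letter

ascii_form = domain.encode("idna")     # per-label ToASCII conversion
print(ascii_form)                      # b'xn--bcher-kva.example'

roundtrip = ascii_form.decode("idna")  # back to the Unicode form
print(roundtrip)                       # bücher.example
```

The DNS itself never had to change: resolvers only ever see the `xn--` form.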
IGF was formed as an outcome of the “World Summit of the Information Society” (WSIS) work in 2005, with a timeframe of 5 years. In 2009 the UN extended this for 5 more years. Whether this will be extended further is hard to predict, since some stakeholders are interested in more governmental influence and a closer binding to UN process and structure.
Markus mentioned the ‘Tunis Agenda’ as one of the important documents describing the IGF work and premises. IGF is described as a very extraordinary UN forum, in the sense that it works with stakeholders outside the normal governmental sphere dominating the UN. Markus emphasized the importance of these stakeholders and the paradigm under which IGF conducts its work, since non-governmental stakeholders provide a reality check, which is most needed when dealing with something as complex as the Internet.
Following Markus was Juuso Moisander; Juuso represents the Finnish government in IGF and EuroDIG. EuroDIG is the European branch of IGF and is having its next meeting in Stockholm, Sweden in January 2012.
Last speaker in this seminar was Nurani Nimpuno (@nnimpuno); Nurani is one of the many stakeholders playing an important role in the IGF. Nurani supplemented Markus and mentioned the IBSA proposal (PDF), which is another important document produced in the context of IGF. The mentioned documents are interesting if you want to get more in depth with the work going on in IGF.
I do not feel my post is giving this particular seminar the depth and detail it deserves. The topic was quite interesting, and Markus, Juuso, Nurani and the moderator Staffan Jonson provided excellent insights and descriptions of the workings of IGF, but this was very new territory to me, so I might not have caught sufficient detail and angles to capture all the facets of the IGF work. I hope that this post can help to spark an interest in the work carried out by the IGF.
One funny thing I did pick up, judging from this post and the seminar, is that the use of acronyms is most certainly not restricted to technical documentation and systems.
After two pretty heavy keynotes by Bruce Schneier and Lynn St Amour I decided to attend something less political and more technical, I am a technician after all. So I attended ‘App eller Web’ (App or Web).
The presentation was in Swedish, but I am in the lucky position that I understand common Swedish.
The first presenter, Andreas Sjöström (@AndreasSjostrom), did a presentation where he defined what an app is. Taking very much a web perspective, he did a historical overview showing current marketing campaigns from different countries emphasizing the app presence, much like what had been seen some years ago with URLs and the web.
When it came to the decision on web or app, Andreas emphasized the need to be consistent in your choice, and in your implementation in particular; the hybrid solutions seen in many places are to be categorized as utter failures.
Andreas also mentioned another thing, and that was the human relation to a smartphone compared to a laptop. His example was asking somebody about borrowing their laptop to access webmail. Most people would answer positively. But requesting the same thing with an iPhone would result in a negative response. He called this the toothbrush relation. Apparently we are more attached to our smartphones than to our laptops.
In a graph presented by Andreas with some statistics from IDG, it was stated that users browsing from smartphones stay on pages longer than people browsing from computers. I have always found it puzzling how these numbers can be obtained for an asynchronous protocol, but what was more puzzling was the difference, which was significant. I have not been able to obtain the report, but I will try and add it if I succeed.
Andreas had only one request: making your current website work on mobile is the least effort you should invest.
Next up was Patrik Axelsson (@patrikaxelsson).
Patrik had a more technical approach. Doing some combinatorics over Android OS versions and requirements for resolution support, you end up with about 48 modes which should be tested. Patrik referred to a scoreboard which would assist in the decision on going either native (app) or web. I am sorry I cannot refer to the scoreboard; if I am able to obtain it, it will be listed in the entry.
UPDATE: the presentation is on Slideshare, see slides 23 and 25.
Furthermore, Patrik recommended doing an analysis first and then picking the paradigm (app or web) based on what your requirements for usability etc. would be. Then you are able to make an informed decision – if the first question you ask is what method to use for implementation, you are doing it wrong.
Another recommendation from Patrik was very much in line with the presentation by Andreas. Do mobile web first, then find out what scenarios need special and additional treatment and might need a dedicated utility, perhaps implemented as an app – again make informed decisions.
Last presenter was Björn Hedensjö (@bjornhedensjo). Björn sort of followed up on the two previous presentations, and I did not take any notes.
My reflection on the seminar was that some important factors were missing. My thoughts on the topic are very much in line with the points made by Patrik.
The discussion was very much about distribution and usability. I think the debate lacked an aspect which was raised at Internetdagen in Copenhagen, where it was mentioned as one of the topics that would influence the future of the Internet. One could argue that it is a tad vision-less, but in this context it made more sense: convenience.
The smartphone experience is very much one of convenience. So I think that when deciding app or web you have to define what is convenient. The use of Internet resources can pretty much be hidden from the user, taking AJAX for web, app solutions and especially cloud trends into consideration.
All in all an educational talk, with some good pragmatic pointers.
I had no prior knowledge of ISOC and its work, and it was quite interesting to get an overview of an organization whose primary areas of focus are also important in my opinion.
Lynn gave an overview of the work done in ISOC. ISOC emphasizes a model based on distribution and collaboration. Some of the key issues have been emerging countries and economics.
The ISOC regards the Internet as an enabler for everyone. IPv6 is a high priority if we want to continue the evolution of the Internet and enable more countries and people.
Lynn focused very much on the multi stakeholder principle and the capabilities exposed when using such a principle. The philosophies dominating the ISOC work are based on collaboration, openness and democracy.
Lynn mentioned the importance of keeping the virtues on which the Internet was built alive. The Internet as we know it is based on open standards, it supports a large variety of business models, meaning diversity and innovation can thrive more freely.
The most challenging years are ahead, and one of the bigger problems facing the Internet and the principles on which it has been built is censorship. She mentioned DNS blocking, where we can observe the problem of technology being used to censor for non-technical reasons. Lynn mentioned that 44 countries are doing aggressive filtering; a year ago it was 4.
I am not sure about the exact numbers, but the picture is pretty clear and the development is most concerning. Censorship monitoring services should be able to give more exact data on this development.
The continued work in ISOC and related organizations is of utmost importance, since what we have observed over recent years is that the Internet is an enabler for development, economic growth and freedom of speech and expression. Lynn explicitly mentioned Article 19 of The Universal Declaration of Human Rights.
This is my first time attending Internetdagarna in Stockholm, Sweden. I attended our Danish equivalent in October, but had been informed that this was much bigger, with many more topics and tracks. I must admit that I am usually pretty busted after a technical conference, and day 1 of Internetdagarna did most certainly not disappoint: I am exhausted after a long and very educational day. Here follow my notes and some reflections on the various talks I attended.
I have divided the blog post into the separate talks, since they became somewhat lengthy.
I started out with the security guru, Bruce Schneier (@bruce_schneier / @schneierblog), whom I have been following on Twitter. Keynotes are always interesting, since the speakers are often given a free hand in their choice of topic, and today Bruce Schneier did not disappoint.
Bruce had on a previous occasion participated in a panel in Washington DC, where the term “Cyberwar” and its relevance had been debated. Bruce had come to the conclusion that the term “Cyberwar” is not really well defined.
Bruce went over several examples of different uses of the term, both to emphasize its ambiguity and to demonstrate its widespread use in various communities, ranging from military, to security, to media.
He brought up some interesting characteristics of what is often referred to as “Cyberwar” and compared this to conventional use of the term war. At the same time he described some of the characteristics of the events which have been categorized as “Cyberwar” by various groups, without consensus or a definition of terms in place.
Bruce did by no means downplay the danger of the events and activities related to what is often labelled “Cyberwar”, but due to the vague definition, these events and activities get labelled “Cyberwar”, which makes it even more difficult for us to actually address what the problems and possible remedies would be. Bruce referred to this as “cognitive confusion”.
Technology is spreading capability in a sense that we have not observed before. Traditional weapons of war like tanks and airplanes are limited to governments and states, but the weapons which would play the primary role in a “cyberwar” are much more widely spread and more easily distributed, and they carry no return address; there are no insignia or flags.
The motivation of the attackers in these types of events is also different. Some attacks might have their roots in what could also be the reasoning leading up to skirmishes or a war, whether based on belief, economics or culture. But these sorts of attacks are the same as those used by politically motivated activists or criminals. As Bruce describes it: on the Internet, attack is easier than defense.
I can comment here that the same observations were presented at AppSecEU in Dublin, which I attended back in June (see also: blog posts from day 1 and day 2).
When we have problems defining who is attacking and why, the comparison to war no longer makes sense. Attacks might be government-tolerated or even government-sponsored, but we have no way of telling. The regular ways of handling nation-state conflicts are via traditional government channels, using diplomacy and treaties.
When the aggressor however does not match the above, it is a task for the judicial system. But the legal frameworks fall short, for the same reasons.
This lack of clarity makes the case for a shift of power in the US and a move towards more military jurisdiction over civilian jurisdiction. One of the basic arguments is based on the fact that the military protects traditional infrastructure like the power grid and water supply, so why should it not protect something as essential as an Internet backbone?
Bruce mentioned APT (Advanced Persistent Threat) as one of the attack vectors. Where our security measures today are relative to the attack vectors, APT requires a more absolute security, due to the complexity of these sorts of attacks.
What I take away from the talk is that we need to update our perception and knowledge of these aspects of the Internet. Attacks are not going to go away, and we need to be able to handle these threats using the proper means and categorization.
War is most certainly not the answer.
After a super day 1 at AppSecEU 2011, it was not hard to get out of bed and go to the conference; well, a bit hard after a social event at a local pub, where I mingled with other attendees including Janne Uusilehto, who was to give the first keynote on day 2, and I promised him I would be there…
Apart from the keynote by Janne Uusilehto, I had planned to attend the following sessions:
- An introduction to the OWASP Zed Attack Proxy by Simon Bennetts, OWASP
- New standards and upcoming technologies in browser security by Tobias Gondrom, IETF WG
- Six Key Application Security Program Metrics by Adrian Evans, Whitehat Security
- Keynote: Alex Lucas and Liam Cronin, Microsoft
- Putting the smart into Smartphones by Dan Cornell (@danielcornell), Denim Group
- Practical Crypto Attacks Against Web Applications by Justin Clarke (@connectjunkie), Gotham Digital Science
- Keynote: Ivan Ristic, Qualys
Unfortunately Ivan Ristic had lost his voice, so the talk: Six Key Application Security Program Metrics by Adrian Evans was moved to be the last keynote of the day.
I attended the keynote by Janne Uusilehto, but I did not take any notes: I was present and online, but in recovery mode. Janne’s talk was pretty much along the lines of Brad Arkin’s keynote on day 1. It was pretty high-level stuff, and neither the organization I am working for nor I, on a personal and professional level, are really there yet. I am however really putting a lot of effort into deciding how our SDLC (Software Development Lifecycle) is going to be shaped. Many good points and pointers from Janne Uusilehto, and just his listing of resources and organizations paints a picture of an industry coming together to define, implement and promote application security.
Next was an OWASP project presentation. The two OWASP project presentations had really provided much useful information, so I did not hesitate to attend ‘An introduction to the OWASP Zed Attack Proxy’ by Simon Bennetts.
Simon Bennetts who has a background in software development made a very interesting point aimed at developers.
- You cannot build secure web applications unless you know how to attack them
Unfortunately pentesting is a black art according to many developers. Simon Bennetts is however the lead on a project: a tool aiming to help developers do pentesting.
The project is an (attack) proxy based on the Paros Proxy. It integrates with a lot of other OWASP tools, like DirBuster, and other projects are integrating with the Zed Attack Proxy.
One of the interesting aspects will be the ability to run in headless mode, so basic pentesting can be integrated with existing toolchains as part of automated test runs.
Next up was: New standards and upcoming technologies in browser security by Tobias Gondrom from the IETF WG.
The IETF is working hard to define new capabilities in the HTTP protocol to enhance the overall security, including:
- A new standard to unify the way browsers detect content types, using sniffing based on standard algorithms
- DNSSEC for TLS
- An X-Frame-Options header to control the use of iframes
- Content Security Policy, a feature with a policy-uri (the location of a policy file) and a report-uri, so that if the policies are not adhered to, the site gets a notification of the violation
- ‘do-not-track’, which may further be enforced by local legislation; this feature is only on the horizon however. This is quite interesting in relation to the new cookie rules in the EU, and it is going to be interesting to see how this is welcomed, since the technology does not define the rules, just the capability.
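Several of the items above boil down to HTTP response headers. A minimal sketch of a server emitting them, using only the Python standard library (the header values are illustrative examples, not a recommended policy):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading, urllib.request

class SecureHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Tell browsers not to second-guess the declared content type.
        self.send_header("X-Content-Type-Options", "nosniff")
        # Forbid framing of this page (clickjacking defence).
        self.send_header("X-Frame-Options", "DENY")
        # Restrict resource loading and ask for violation reports.
        self.send_header("Content-Security-Policy",
                         "default-src 'self'; report-uri /csp-report")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Serve on an ephemeral port and fetch the page once to show the headers.
server = HTTPServer(("127.0.0.1", 0), SecureHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
resp = urllib.request.urlopen(
    "http://127.0.0.1:%d/" % server.server_address[1])
print(resp.headers["X-Frame-Options"])         # DENY
print(resp.headers["Content-Security-Policy"])
server.shutdown()
```

A real deployment would of course set these in the web server or framework configuration rather than a hand-rolled handler.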
Tobias Gondrom mentioned the following resources: IETF WebSec and W3C App Sec; unfortunately I did not get the URLs noted down. Tobias Gondrom made the same mistake most of the presenters at AppSecEU did: ending with a slide with a question mark, instead of stopping at the resources and references slide.
The next session was cancelled, but we got a somewhat late notice, so I just stayed in the room and coded some Perl.
After lunch the next keynote followed, with Alex Lucas and Liam Cronin from Microsoft.
Alex Lucas started out by giving a brief historical overview of exploits from a Microsoft perspective. He then went on to describe the initiatives made at Microsoft. Like Adobe and Nokia, these large corporations are really taking application security, and security in general, very seriously, and the results of their work really seem to be paying off. Other corporations can learn from these examples.
Alex Lucas listed some interesting points, which were very much in line with the Adobe initiatives presented by Brad Arkin. I can highlight the following:
- “Plan the work, work the plan”
- Updating is important and for Microsoft much more proactive
- Automation can be used to eliminate whole classes of exploits
- Automation can replace human effort
- Let the tools do the job (compilers, tests, fuzzing, static analysis etc.)
- Security reviews as part of SDLC
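The “let the tools do the job” point, fuzzing in particular, can be sketched in a few lines. The parser below is a made-up example target, not anything from the talk; the harness simply hammers it with random inputs and records any failure that is not a documented error:

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser: first byte is a length, the rest is the payload."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return payload

# A tiny random fuzzer: throw random byte strings at the parser and
# record anything that is NOT one of the errors we expect it to raise.
rng = random.Random(1234)          # fixed seed so the run is repeatable
unexpected = []
for _ in range(10_000):
    blob = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
    try:
        parse_length_prefixed(blob)
    except ValueError:
        pass                       # expected, documented failure mode
    except Exception as exc:       # anything else is a bug worth filing
        unexpected.append((blob, exc))

print(len(unexpected))             # 0 for this parser
```

Real fuzzers are coverage-guided and far smarter, but the principle is the same: the machine explores the input space so a human does not have to.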
As Alex Lucas formulated: “the defender must close all holes, if one exploit is used, the attacker has won”.
Following Alex Lucas was Liam Cronin, who gave a presentation on openness at Microsoft. Microsoft is becoming more open on many points, which is a good thing. Liam mentioned the following points:
- Substitutability, reduces lock-in
- Open Data: http://www.odata.org/
- Open Government Data Initiative
- Data formats must be non-proprietary
- ODF and OpenXML
- Windows Azure (Microsoft cloud), open to PHP, Java and Ruby in addition to Microsoft’s own .Net technologies.
In general a good presentation and again it can be mentioned that Microsoft is participating in the different fora related to security, like http://www.safecode.org etc.
See also: http://www.microsoft.com/openness
The next talk I attended was: Putting the smart into Smartphones by Dan Cornell (@danielcornell).
Dan Cornell gave a very interesting presentation on smartphone application development and pentesting.
One of the key points Dan Cornell made was: “Smartphones are always in a hostile environment”, or to rephrase: with mobile applications the code is on the device, and devices get lost, stolen and change owner all the time, so attackers can reverse engineer applications and/or access data not deleted from the device.
The development recommendations from the talk were:
- Don’t store data
- Communicate securely
See also: http://smartphonesdumbapps.com/
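On the “Communicate securely” point, the core of it is never talking to a server without verifying its certificate and hostname. A minimal sketch with Python’s standard library, desktop Python standing in for a mobile platform:

```python
import ssl

# Build a TLS client context the way the "communicate securely" advice
# intends: verify the server certificate against trusted CAs and check
# that the hostname matches. This is the safe default setup.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# The dangerous anti-pattern apps too often ship (shown only to be
# named, never copied):
# context.check_hostname = False
# context.verify_mode = ssl.CERT_NONE
```

The same two checks exist under different names in every mobile TLS stack; disabling them to silence a certificate error is exactly the kind of shortcut the talk warned against.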
The following talk was practical yet very abstract, but Justin Clarke (@connectjunkie) really gave an insightful presentation entitled Practical Crypto Attacks Against Web Applications.
My notes from this talk are somewhat cryptic themselves, so it is going to take some time to process them, but key points I can bring here are:
- If you are doing cryptography do not be afraid to consult an expert
This is actually good advice. Cryptography is hard, and often we trust cryptographic practices and implementations without really understanding them. Justin Clarke demonstrated some practical examples of how basic use of cryptography really did not provide much security. In addition, Justin Clarke mentioned the following resources:
- Github: padBuster
- Stack Exchange is a good resource for cryptographic advice
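One concrete reason “basic use of cryptography” falls short, closely related to the family of attacks padBuster automates, is that encryption without an integrity check is malleable. A toy sketch using only the standard library (the keystream construction and token format are made up for illustration, not a real cipher):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. For illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"server-secret"
token = b"role=user;uid=42"
ciphertext = xor(token, keystream(key, len(token)))

# The attacker knows the plaintext layout but NOT the key. XOR-ing the
# difference between "user" and "root" into the right ciphertext bytes...
delta = xor(b"user", b"root")
forged = ciphertext[:5] + xor(ciphertext[5:9], delta) + ciphertext[9:]

# ...makes the server decrypt a token the attacker never had the key for.
print(xor(forged, keystream(key, len(forged))))  # b'role=root;uid=42'
```

The fix is authenticated encryption, or at least a MAC over the ciphertext, which is exactly the kind of decision where the “consult an expert” advice applies.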
Last but not least was the final keynote of the conference; as I mentioned earlier, Ivan Ristic cancelled, and instead Adrian Evans of Whitehat Security gave the last keynote:
Adrian Evans spoke really fast and his talk was very much aimed at security professionals. He did however mention a lot of interesting resources and he addressed the lack of general metrics of how we are doing as an industry.
One thing that struck me, which I had heard several times over the course of the conference, was this trust in frameworks. At the GOTO Copenhagen developer conference, frameworks were really getting a serious beating for not being the best way of doing development. The gap between the security and development communities is still somewhat wide, and I think that fora like OWASP really can help close this gap and get people to discuss the future of application security.
I am not really doing Adrian Evans much justice, since what he actually said was that frameworks are good for authentication but not for authorization, but the generally positive view on frameworks at the conference has been quite clear.
Another point made by Adrian Evans was: “We cannot say ANYTHING about likelihood”. OWASP has big challenges ahead, which brings me to say that I really got a lot of good information from this conference. I can highly recommend attending the conference in the future, or at least joining your local chapter of OWASP.
Thanks to all attendees, organizers and sponsors – I had a great first AppSecEU conference.
I have attended several meetings at my local OWASP chapter and they have always been very interesting. I am by no means a security expert; actually, this quote by the character Tracy in Cory Doctorow’s ‘Knights of the Rainbow Table’, which I heard the other day in his podcast, gave me some comfort.
“It’s okay, everyone sucks at security”
At the same time I am most willing to learn more about security in order to become a better developer. For the first time I am attending a security conference, the AppSecEU 2011 in Dublin, Ireland.
The day started out with a double breakfast, first at the hotel, second at the venue. The conference is really nice and reminds me a lot of the many YAPCs I have attended over the years. The attendees seem very much to be practitioners and people involved with all aspects of security in their day jobs.
These were the talks I had set out to attend:
- Keynote: Brad Arkin (@bradarkin), Adobe
- Building a robust security plan by Narainder Chandwani, Foundstone
- The Buzz about Fuzz by Joe Basirico, Security Innovation
- Keynote: Smart phones, app-stores and HTML5 (ENISA) by Dr. Giles Hogben
- Python Basics for Web App Pentesters by Justin Searle
- Secure Coding Practices Quick Reference Guide by Keith Turpin, project leader. OWASP/Boeing
- OWASP AppSensor Project by Colin Watson
Here follows a selection of my notes and some reflections on the different presentations.
Keynote: Brad Arkin, Adobe
Brad presented the Adobe Secure Product Lifecycle (SPLC). He started out by talking a bit about attackers and their motivations, categorizing them and comparing them to criminals. This was a bit fuzzy to me, but I think I got the picture; his conclusion was, anyway, that we need super heroes with green lasers – not really.
Instead we should focus on:
- Hard work
- Repeatable and verifiable processes
- Security must be a priority in all stages of development
He mentioned that Adobe produces popular products, and that this popularity could make exploring attack vectors and exploits in Adobe products attractive to attackers. This is a pattern we have observed before in the industry, and therefore it makes sense.
He then went on and described Adobe’s security strategy, which consists of a lot of different practices. I am not going to go over all of them, but here is a basic and non-exhaustive listing:
- Keep customers up to date, by simplifying updating and installation of Adobe software
- Safe and secure code (Adobe SPLC)
- External engagement, industry, partnerships, threat landscape modeling etc.
- Swift and decisive responses to security incidents are important
- Defensive coding and security testing should be a part of general processes
- Features should be scrutinized using threat modeling
- Use available tools, like compiler flags
- Static and dynamic analysis
Brad talked a lot about training, and Adobe’s approach to training and certification was quite interesting; in general, however, it seemed that security awareness throughout Adobe was the main thing.
In addition he provided these resources:
Next I moved on to ‘Building a Robust Security Plan’ by Narainder Chandwani, Foundstone.
He talked about outlining a security plan and several other resources like:
- knowledge repository
- a security impact profile
He had an elaborate point system he referred to, but did not present directly; it should be described in his paper. In general the idea was to create a database of all your applications and classify them, together with a security impact profile, based on some of the following metrics:
- classification of data
- possible compliance issues
Again, knowledge and information were key factors, and in general I could agree with Narainder Chandwani’s idea, but his generalized approach to security impact profiling should preferably be something more along the lines of either the OWASP Top 10 or similar work from ENISA.
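Since the actual point system was not presented, here is a minimal sketch of what such an application inventory with impact scoring could look like. Everything here is invented for illustration: the metric names, the weights, and the example applications are assumptions, not Chandwani's model.

```python
from dataclasses import dataclass

@dataclass
class AppProfile:
    """One entry in a hypothetical application security inventory."""
    name: str
    data_classification: int   # 0 = public .. 3 = restricted
    compliance_exposure: int   # 0 = none .. 3 = heavily regulated (e.g. PCI)

    def impact_score(self) -> int:
        # A naive weighted sum standing in for a real impact model.
        return 2 * self.data_classification + 3 * self.compliance_exposure

inventory = [
    AppProfile("intranet-wiki", data_classification=1, compliance_exposure=0),
    AppProfile("payment-gateway", data_classification=3, compliance_exposure=3),
]

# Rank applications so the highest-impact ones get security attention first.
for app in sorted(inventory, key=AppProfile.impact_score, reverse=True):
    print(app.name, app.impact_score())
```

The point of the exercise is the ranking, not the exact weights: once every application has a profile, the review effort can be prioritized.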
I missed out on the fuzzing presentation at my local OWASP chapter in Copenhagen, so I thought this was a good chance to get some education in fuzzing and went to see: The Buzz about Fuzz by Joe Basirico from Security Innovation.
There is not much to fuzzing as such. Fuzzing is all about attempting to break or misuse applications using generated, malformed values. You can divide fuzzing into three categories:
- random fuzzing
- seeded fuzzing
- format aware fuzzing (a variation of seeded fuzzing)
You send a request, process the response, fuzz and then repeat.
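The send/process/fuzz/repeat loop can be sketched in a few lines of Python. This is a toy example of seeded fuzzing: the `parse_header` function and its `HDR1` format are invented stand-ins for a real system under test, which in practice would be a request to the target application.

```python
import random

def mutate(seed: bytes, n_flips: int = 3) -> bytes:
    """Seeded fuzzing: flip a few random bytes in an otherwise valid input."""
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def parse_header(blob: bytes) -> str:
    # Stand-in for the system under test; a real fuzzer would send the
    # mutated input to the target and inspect the response instead.
    if not blob.startswith(b"HDR1"):
        raise ValueError("bad magic")
    return blob[4:].decode("ascii")

seed = b"HDR1hello"
random.seed(42)  # deterministic run for the example
failures = 0
for _ in range(1000):
    try:
        parse_header(mutate(seed))
    except (ValueError, UnicodeDecodeError):
        failures += 1  # in a real campaign, save the failing input for triage
print(f"{failures} malformed inputs made the parser fail")
```

Format-aware fuzzing would go a step further and only mutate fields the parser actually interprets, keeping checksums and magic values valid so the input reaches deeper code paths.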
Joe Basirico emphasized that input validation is the first line of defense and mentioned that input comes from everywhere, like filesystems and databases. So the classical user-facing input validation might not suffice – a very interesting and thought-provoking idea, which got me thinking.
Things to be aware of when doing fuzzing are:
- Control characters
- Checksums and verification blocks
- Order and required sections
One should consider blacklisting vs. whitelisting input data to tighten security on the first line of defense. I might get back to this topic later, since I am trying to collect all my notes on defensive programming in an article.
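To illustrate the whitelisting side of that trade-off, here is a minimal sketch: instead of blacklisting known-bad characters, accept only values matching a pattern you explicitly allow and reject everything else by default. The username rules below are an assumption chosen for the example.

```python
import re

# Whitelist: lowercase letter first, then 2-15 letters/digits/underscores.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,15}$")

def valid_username(value: str) -> bool:
    """Reject by default; accept only what the whitelist pattern allows."""
    return USERNAME_RE.fullmatch(value) is not None

print(valid_username("alice_01"))     # well-formed name
print(valid_username("al"))           # too short
print(valid_username("alice; DROP"))  # characters outside the whitelist
```

A blacklist would instead try to enumerate dangerous inputs, which fails open whenever a new attack pattern appears; the whitelist fails closed.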
He mentioned the following tools and resources:
A lot of good and practical information, which could result in more work on my side, since there are a lot of aspects from Joe’s presentation I would like to dive into.
After a lunch break, where I walked into town to find the local Mac store, another keynote was scheduled.
Smart phones, app-stores and HTML5 (ENISA) by Dr. Giles Hogben
Dr. Hogben presented some work being done by ENISA in the smartphone area. Smartphones are very interesting from a security perspective, since they have all sorts of sensors, IP connectivity and a lot of CPU power. Issues in regard to smartphones relate both to security and to privacy, and many aspects of these new technologies reuse best practices from other technologies, but at the same time the many features and opportunities open up new potential threats.
ENISA has produced a smartphone report together with OWASP. It lists:
- top 10 risks
Of all the smartphone-developing companies in the world, only one had not participated: Apple. Not surprising, but a bit disappointing, because iOS devices are by no means more secure than other smartphones, and considering the point from Brad’s keynote about popularity being a challenge for Adobe, Apple should consider participating in projects like the one done by ENISA.
In addition ENISA is working on a report and several other deliverables on HTML5.
In addition to analyzing the HTML5 specifications, the specifications had also been compared against one another, and they suffer severely from underspecification – a point which seems very much in line with the presentation by Bruce Lawson from Opera that I saw at GOTO Copenhagen.
The HTML5 work is very much in progress, but it is still possible to chip in, see also: http://www.enisa.europa.eu/act/application-security
I then went to see: Secure Coding Practices Quick Reference Guide by Keith Turpin, project leader, OWASP/Boeing.
The quick reference guide is a 17-page document developed by OWASP; it originates from Boeing, who turned over the ownership and copyright to OWASP. It aims to be technology agnostic and focuses on what to do, not how to do it.
Some of the aspects Keith Turpin highlighted were intended (requirements) vs. unintended (what the application actually can accomplish) functionality. The concept of restraining the allowed unintended functionality was similar to the problem described by Joe Basirico (see the earlier section), and again this is something I am going to revisit in my write-up on defensive programming.
Keith Turpin made the point that we have to evaluate the whole stack and the environment of the application, since operations and application management might change the context in which our application operates, and therefore the security aspects and threat modeling might have to consider different factors.
The quick reference guide is in checklist format and is currently being revised to become even more technology agnostic, and hopefully the points will be enumerated using an overall enumeration guideline from OWASP, so it will be easier to cross-reference between OWASP documents and external resources.
As a developer I found this talk very interesting and I am looking forward to examining the Secure Coding Practices Quick Reference Guide. Still in developer mode, I went to see: Python Basics for Web App Pentesters by Justin Searle.
I do not have many notes from this talk. The presenter, Justin Searle, dissed Perl for no apparent reason – something I thought we were over years ago. He described how he does pentesting using Python and referenced a Google Code project with all his templates. The only thing that bothered me, apart from the Perl thing, was that he kept saying templates when he meant boilerplates.
The last talk of the day was: OWASP AppSensor Project by Colin Watson. This talk was off to a bumpy start: first the Apple computer Colin was using had issues getting the right resolution for the presentation, then Colin stated the following:
“If we do not know if we are under attack or whether we are being exploited we are doing security wrong”
It took me some time to understand what he actually meant, but as he went through the presentation it all of a sudden made a lot of sense. The idea behind the AppSensor project is to do more application-aware monitoring in our applications, so we can take countermeasures when something unexpected happens. The idea is to put detection points at key points in our application, monitor these, and take action when something unexpected occurs.
The countermeasures and actions can be anything, based on our application, context and circumstances:
- locking down user, functionality or application
- limiting access
- blocking IP
- alerting user / admin
I did not get all of the points listed by Colin noted down, but you get the picture. Colin mentioned something that was in line with Joe Basirico’s presentation about input: if we put a detection point between our database and the application, we can also detect when a user is retrieving an unexpectedly high number of records from the database. Joe Basirico emphasized that data from a database is also just input and should not necessarily be trusted as is.
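The database example above can be sketched as a toy detection point in the AppSensor spirit: count suspicious events per user and trigger a countermeasure once a threshold is crossed. The threshold, the row-count heuristic and the lockout response are all assumptions for the example; real deployments choose these per application.

```python
from collections import defaultdict

THRESHOLD = 3                 # how many suspicious events before we react
events = defaultdict(int)     # suspicious-event counter per user
locked = set()                # users we have locked out as a countermeasure

def detection_point(user: str, suspicious: bool) -> None:
    """Record a suspicious event and lock the user at the threshold."""
    if not suspicious:
        return
    events[user] += 1
    if events[user] >= THRESHOLD:
        locked.add(user)  # could equally alert an admin or block the IP

# Simulated queries: flag any query returning over 1000 rows as suspicious.
for rows in (10, 5000, 7000, 9000):
    detection_point("mallory", suspicious=rows > 1000)

print("mallory locked:", "mallory" in locked)
```

The key design point is that the detection logic lives inside the application, where the meaning of "unexpected" is known, rather than in a generic perimeter device.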
All in all a very educational day and I am looking very much forward to day 2 of the AppSecEU 2011 conference.
This is the corporate blog of logicLAB, a software development company based in Copenhagen, Denmark.