Posted on December 30, 2008 in Video by adam

I’m trying to return to the habit of watching a video on something every couple of days (call it an early New Year’s Resolution if you must) and so I’m starting to look at what was queued up before I got too busy to do it.

Today’s video is a talk that Ross Anderson gave at Google called Searching for Evil. In it he iterates through a number of the major types of scams found online, from phishing to 419 fraud to Canadian pharmacies. I’m not sure what I was expecting, but it turns out that I now have the perfect video to give people who are clueless about security and how to spot these kinds of scams. (You know, the people who get sucked in by every hoax and ‘warning’ that arrives in their mailbox.) I’m not going to iterate over each scam as the video is needed to truly appreciate them, but here are the rest of my notes.

  • Ross’ website has a tonne of links about security economics
  • The underlying cause of a lot of security failures is the incentives around not fixing them
  • Adverse Selection
  • Wicked people go out of their way to get seals of approval from reputable organizations, thus making the seal of approval itself a red flag for whether something is a scam
  • To break up a system, target the bottlenecks
  • Irrevocable payments are a common denominator of evilness (such as Western Union)
  • Assumptions about identity validity / assurances are highly geography-specific (see the Chinese gymnastics team at the recent Olympics)
  • If you can program it, it is administration. Everything else is management
  • A bank is just a Perl script these days
  • Never underestimate the stupidity of the public
  • Plagiarism detection is a useful tool in identifying evil sites (a toy similarity sketch follows these notes)
  • The people most trusted by the public are academics. Fool them and you inherit the trust of their followers
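
On the plagiarism-detection point: since scam sites tend to be near-verbatim clones of each other (or of the legitimate sites they impersonate), even crude text similarity can serve as a red flag. Here is a minimal sketch of that idea; the function names and the 0.8 threshold are my own invention, not anything from the talk.

```python
def shingles(text, k=5):
    """Split text into overlapping k-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def looks_cloned(page_text, known_pages, threshold=0.8):
    """Flag a page whose text heavily overlaps an already-seen page."""
    mine = shingles(page_text)
    return any(jaccard(mine, shingles(other)) >= threshold
               for other in known_pages)
```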



Direct link here.

Posted on June 19, 2008 in Quality, Video by adam

I found a nice little video via a thread on /. called How To Teach a Healthy Dose of Skepticism. The video itself is called Here Be Dragons – An Introduction to Critical Thinking by Brian Dunning, who runs the Skeptoid podcast. The title is somewhat misleading as it is aimed at critical thinking about marketing (more or less) rather than critical thinking in general, but it is still worth the 30 minutes or so to watch. Oh, and it is free. Of course, you’ll want to make your own notes, but here are mine.

  • With a made up concept and a few words the unknown becomes simple and satisfying
  • Pseudoscience – an idea that claims to be real but is not backed by any real science / evidence
  • If we don’t test / experiment we don’t learn / progress
  • Common warning signs to identify pseudoscience
    • Appeal to authority – white lab coat, celebrity endorsement, mentions certification
    • Ancient wisdom – we shouldn’t care that the ancient Chinese thought something worked; where is the proof that it actually works?
    • Confirmation bias – When we remember ideas that match our beliefs and forget those that don’t
    • Confusing correlation with causation – just because two events look related does not mean they are
    • Red herring – irrelevant distractions that in no way address the item under discussion
    • Proof by verbosity – it’s not the quantity of information, it is the quality of the information
    • Mystical energy – energy is defined as measurable work capability; any time you hear ‘energy’ being used, substitute the real definition and see if it still makes sense
    • Suppression by authority – there is no business reason to suppress science or invention; new things make money
    • All natural – lots of natural things are harmful too and the non-harmful ones are often synthetically created to be more effective/easy to produce
    • Ideological support – good science is done in the lab, not the courts/media/blogs
  • Apparently Brian sleeps with a clip-on mic?
  • The Law of Large Numbers disproves ‘psychic connection’ / ‘precognition’ (a toy simulation follows at the end of this post)
  • There is a nice explanation of the triple-blind FDA trial
  • Why do smart people think crazy things?
    • Humans like the new, cool thing
    • Simple, easy answers are seductive
    • Scientific ideas are always called ‘theories’ and are subject to improvement, so there is a speck of a chance that they are wrong
    • People lack the tools to think critically; you have to seek them out yourself, since pop culture will not give them to you
    • Embrace the information that meets the scrutiny of testing
    • Brian’s reading list

Of course, you have to think critically about everything presented in the video.
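
As a quick illustration of the Law of Large Numbers point, here is a toy simulation (mine, not Brian’s): give a large pool of people ten coin flips each to ‘predict’, and a handful will call every single flip purely by chance. The population size is made up for illustration.

```python
import random

PEOPLE = 100_000   # assumed number of "test subjects"
CALLS = 10         # coin flips each person tries to predict

psychics = 0
for _ in range(PEOPLE):
    # A person calls every flip correctly with probability (1/2)**CALLS
    if all(random.random() < 0.5 for _ in range(CALLS)):
        psychics += 1

print(f"{psychics} of {PEOPLE} people called all {CALLS} flips "
      f"(expected about {PEOPLE / 2**CALLS:.0f}) -- by pure chance")
```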

Posted on April 24, 2008 in Quality, Video by adam

I tend to think that the security of a system should be tested before any other area (usually), so it’s not surprising that Neil Daswani‘s talk What Every Engineer Needs to Know About Security and Where to Learn It caught my eye. Unfortunately, if this is truly what every engineer needs to know about security then it is no surprise why so much insecure code is floating around. It seems what every engineer needs to know is how to buy his book and/or visit his learn-security website.

I did take some notes, though.

  • I did not know, or did not know I knew, that XSS, SQL injection and classical buffer overflows are all in the same attack class: command injection. For some reason I thought they were independent, but this does make a certain amount of sense. (A minimal injection demo follows these notes.)
  • According to the Security Focus Vulnerability database the following types of problems are increasing
    • Design Errors
    • Boundary Conditions
    • Exception Handling
  • Using the same data set, input and access validation problem rates are holding steady
  • Like most things, there is more than one such data set and they all measure things differently; however, the top 4 problems seem to be
    • XSS
    • Various Injection holes
    • Memory corruption
    • DoS
  • Regardless of where the data is coming from there is an increase in the number of detected vulnerabilities. Of course, does that mean that we’re just better at detecting these or is the number being written increasing? (Likely both)
  • Neil thinks every engineer should be knowledgeable in the following areas
    1. Secure Design – least privilege, fail-safe stance, weakest link, etc.
    2. Technical Flaws (that can result in attacks) – cause, effect
  • Universities don’t teach security. A sweeping statement, but generally true
  • “Security is a process, not a product” – Bruce Schneier
  • Recommended Courses
  • Recommended Books
  • Recommended sites
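
To make the command-injection grouping from the first bullet concrete, here is a minimal sketch (my own, not from the talk) showing how user data spliced into a SQL command can rewrite the command, and how a parameterized query avoids it. The table and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

evil = "alice' OR '1'='1"

# Vulnerable: the input is spliced into the command string itself.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + evil + "'").fetchall()
print(rows)   # returns every row -- the data rewrote the command

# Safe: a parameterized query keeps data and command separate.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
print(rows)   # returns nothing -- the input is treated purely as data
```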



Direct link here.

Posted on December 12, 2007 in Quality, Video by adam

Bassem Elkarablieh is a Ph.D. student at the University of Texas at Austin. The subject of his thesis is ‘Assertion-based Repair of Complex Data Structures’ which is a pretty cool concept. I’m not sure if I would be all that comfortable with an application that implements it, but it is cool nonetheless.

  • Normally, when an assertion fails your application terminates, and you restart it, duplicate the problem and debug it. But what if you cannot terminate the app and debug it? Why, you train it to repair itself, of course!
  • However, can we guarantee that just because the structure of the data is corrected that the state of the program is correct? Well, no. But it might be one that allows the system to continue without crashing. But do we want it to? I tend to think not as I cannot predict or trust the application from the repair point forward.
  • As an implementation note, the developer needs to provide the underlying structure for the data rather than having it discovered automatically.
  • Which leads to the question: when do you call the magic structural-integrity assert?
    • In apps that need high reliability, at every garbage collection
    • Everywhere else, any time the system throws an exception might be a good determining heuristic
  • The assertions describe what the structure should look like; repair routines describe how to fix it (a toy version is sketched after these notes)
  • Apparently similar solutions are used currently inside IBM’s MVS and Lucent 5ESS switches
  • Some pretty big gotchas:
    • If the repair routine is complete, then the repair can be complete. But if there is a bug in the repair routine then your program might not crash, yet you cannot trust it (not that you really can even if the repair was complete)
    • If the data itself is corrupt as a result of the structural issues, you cannot repair it; only the structure
  • Something for the future (and is being studied by a colleague of his) — automatically generating and inserting a fix based upon the repair action that was necessary
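
This is not Bassem’s actual system, just a toy sketch of the general idea: a structural assertion over a singly linked list, plus a repair routine that restores the structure (though never the lost data) when the assertion fails. All names are my own.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def assert_acyclic(head):
    """The structural assertion: the list must terminate (no cycles)."""
    seen = set()
    node = head
    while node is not None:
        if id(node) in seen:
            return False
        seen.add(id(node))
        node = node.next
    return True

def repair_acyclic(head):
    """The repair routine: cut the link that closes the cycle."""
    seen = set()
    node = head
    while node is not None:
        seen.add(id(node))
        if node.next is not None and id(node.next) in seen:
            node.next = None   # structure fixed; any corrupted data stays lost
            return
        node = node.next

# A corrupted list: a -> b -> c -> b -> c -> ...
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, b

if not assert_acyclic(a):
    repair_acyclic(a)          # instead of crashing, repair and continue
assert assert_acyclic(a)
```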



Direct link here.

Posted on December 11, 2007 in Quality, Video by adam

A couple of weeks before I met Ryan Gerard at CAST this year, he was speaking at Google’s Seattle Conference on Scalability. The gist of his talk was that running all your regression tests all the time is inefficient, and if you could determine which ones would get you the most bang for the buck on a particular build then you would be much better off. Seems like a good theory to me. Of course, he recently left Symantec so we’re unlikely to get periodic updates on how things are progressing.

  • Regression testing
    • doesn’t scale
    • often tests everything instead of just the things that actually changed
    • is people intensive
  • So, given a set of source code changes, determine the set of tests that need to be run
  • There are (at least) 3 different ways to associate source code deltas to test cases
    1. requirements – test cases checked-in for this requirement are associated with this chunk of code
    2. defects – last time this code was checked in, this test case failed (a toy version of this association is sketched after these notes)
    3. build correlation – the code checked in for a given build is associated with the test cases that failed in that build
  • When you start, the requirements type will be the most (indeed only) important type, but over time build correlation becomes primary, coupled with defects
  • The implementation of this seems to require that your tests skew towards the automated side of the scale rather than the manual or exploratory side.
  • Some policies that make this possible
    • end-to-end traceability for requirements
    • in bug fix check-ins, include the bug number in the comments so you can parse the data and determine which files changed
    • record the changes that are the diff between the two builds
  • I’m pretty sure Ryan suggested that you integrate with your build system first, but if he did not, I (also) think that is a smart place to start
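
Here is a toy sketch of the defect association from point 2 (my invention, not Ryan’s implementation): parse bug numbers out of check-in comments and join them against a record of which tests previously failed for those bugs. The message format and data are assumed for illustration.

```python
import re

BUG_RE = re.compile(r"\b(?:bug|fix(?:es)?)\s*#?(\d+)", re.IGNORECASE)

# Historical record: bug id -> tests that failed while the bug was open.
BUG_TO_TESTS = {
    "1042": {"test_login_timeout", "test_session_expiry"},
    "1107": {"test_report_export"},
}

def tests_for_commits(commit_messages):
    """Select the regression tests implicated by a batch of check-ins."""
    selected = set()
    for message in commit_messages:
        for bug_id in BUG_RE.findall(message):
            selected |= BUG_TO_TESTS.get(bug_id, set())
    return selected

print(tests_for_commits(["Fix #1042: stop dropping idle sessions"]))
# -> {'test_login_timeout', 'test_session_expiry'}
```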



Direct link to video here.

Posted on December 6, 2007 in Quality, Video by adam

Early in October, the Agile Alliance held the Agile Alliance Functional Testing Tools Visioning Workshop. Part of the workshop involved lightning talks which have been put online. Yes, I’m a little late watching and commenting on these, but better late than never.

Elisabeth Hendrickson – Why Automated Acceptance Tests Are Crucial

  • repeatability in testing is overrated
  • exploratory testing is by its nature not repeatable
  • therefore, expectations should be automated
  • these expectations are the acceptance criteria

Elisabeth Hendrickson – A Place To Put Things

  • Many things succeed because they have a structured place to put things (xunit has setup, test, teardown, fit has tables and fixtures)
  • functional tests don’t have nice places to put things (models, etc.)
  • her proposal
    • expectations about externally observed behavior: in version control (in text, not binary)
    • fixture code: in version control

Naresh Jain – ProTest: Framework for Prioritizing Tests For Dynamic Execution

  • build test suites dynamically
    • create a dependency between classes and tests
    • complexity
    • tune the amount of feedback you get depending on the scope of the change
  • faster feedback is great, but there is a cost

Antony Marcano – To Test or Not to Test

  • fit is really good at communication, not automation

Brian Marick – Boundary Objects

  • boundary objects sit between two social worlds and allow them to interoperate
  • don’t argue about the definition of a boundary object, just use them
  • Brian’s paper on boundary objects
  • packages bring theories along with a tool, and by using the tool the theory infects you
  • using a hybrid language in a trading zone, two groups can communicate

Jim Shore – Does it Work? Is it Done? Is it Right? (Make it Light!)

  • functional tests
    • expensive
    • slow
    • brittle
    • don’t deliver directly to the product
  • we want to know 3 things when testing, see the title
  • TDD is part of the solution as it confirms the developer’s intent (does it work?)
  • involve a ‘customer’ to determine whether they are done
    • actual customer
    • ui designer
    • etc.
  • concrete examples determine if it is right
  • fit is a way of automating your examples

Kevin Lawrence – What Worked (and what didn’t)

  • the tutorial is a very good / effective oracle
  • “pretend you are speaking to an expert user over the phone”
  • examples make better tests than tests

John Dunham – Fresh Eyes: What’s the problem

  • no sound

Gerard Meszaros – Wish List Continued & Misc. Thoughts

  • no sound

Brian Marick – Let Them Eat Cake

  • redefines the chasm curve
  • rebukes the non-passionate ‘it’s a paycheque’ tester
  • give up on the conservatives and concentrate on the people who care

Posted on November 11, 2007 in Quality, Video by adam

A while back I posted a link to all the DefCon 15 videos. I’ve started to go through them now. There are waaaay too many of them, but I’m game to go through them.

Broward Horne talks about click fraud and how simple it is to perpetrate it in Click Fraud Detection with Practical Memetrics. He uses a number of heuristics to detect the potential likelihood of click fraud. One cool thing mentioned is a paper by Google called The Anatomy of Clickbot.A.
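
My notes don’t capture Broward’s specific heuristics, so here is one generic example of the kind of signal such a detector might use: flagging sources whose clicks arrive faster than a human plausibly clicks. The threshold and data are invented.

```python
from collections import defaultdict

def suspicious_sources(clicks, max_per_minute=10):
    """clicks: iterable of (source_ip, minute_bucket) pairs."""
    counts = defaultdict(int)
    for ip, minute in clicks:
        counts[(ip, minute)] += 1
    # A source that clicks an ad 10+ times in one minute is unlikely human.
    return {ip for (ip, _), n in counts.items() if n > max_per_minute}

clicks = [("10.0.0.5", 0)] * 40 + [("192.168.1.7", 0)] * 3
print(suspicious_sources(clicks))   # -> {'10.0.0.5'}
```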

Dan Kaminsky, Director of Penetration Testing Services at IOActive, talks about exploiting some of the core technologies and assumptions that the internet is based on in Design Reviewing the Web. He also gets points for the best quote in a long while: design bugs are like zombies; they come back from the dead. Essentially, he uses the Same Origin Policy to hack itself, using bugs in Flash that were fixed in Java, etc. back in 1996. Two other notable items:

  • A hacker’s career goal should be to be the hacker in the room when dumb decisions get made
  • Imagine how much money you could make if you could sell the top link result in Google

To solve the second point, he suggests that we are all going to have to have all our content running over verifiable, secure connections.

How To Be A WiFi Ninja is a wonderfully named talk by Matthew L Shuchman (pilgrim) of WarDrivingWorld. Naturally, he had to define what the Ninja Code is, which I’ve reproduced.

  • Determine needs and objectives
  • Never trust the manufacturer’s limitations
  • Make changes to existing setup(s)
  • Access wifi at extended range and with greater speed

Hopefully it is obvious to the reader how we can twist this around into a set of steps for doing better testing. The key thing the talk illustrates is how important it is to know the tech you are working with. Sure, having domain knowledge is important, but knowing how it is built is also critical!

Posted on November 11, 2007 in Quality, Video by adam

Joel Spolsky has been on a marketing tear this year. He released a new book (sorry, didn’t like it), has been on IT Conversations and in ACM Queue, and most recently went on a World Tour showing off FogBugz 6.0. I missed the Toronto event due to a family commitment, but was able to see the Dog ‘n Pony show they recorded in Austin, TX a couple of weeks later. Here are the notes I would have taken in Toronto.

  • Wow, FogBugz is pretty cool. Of course, the person who owns the product is doing the demo, but…
  • Estimates are everything, especially since the driving feature of the new version is EBS (Evidence Based Scheduling); a rough sketch of the idea follows these notes
  • Reported bug counts are often unnaturally capped by headcount limits in QA departments
  • The MS Project team doesn’t use MS Project to track their schedules
  • I like it when people point out scenarios where their product won’t work as well as a competitor.
  • Software Development should be just one line in the project plan
  • If something doesn’t work, degrade gracefully
  • There were 3, maybe 4 women in the audience
  • try.fogbugz.com for a trial copy
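
As I understand Joel’s published description of EBS (the real FogBugz algorithm is surely more involved), the core is a Monte Carlo simulation: divide each new estimate by a velocity sampled from the developer’s historical estimate-vs-actual ratios, and read ship-date confidence off the resulting distribution. A rough sketch, with all numbers invented:

```python
import random

# Historical velocities: estimated hours / actual hours for finished tasks.
past_velocities = [1.0, 0.8, 0.5, 1.1, 0.4, 0.9]

remaining_estimates = [4, 8, 2, 16]   # hours, made up for illustration

def simulate_total(estimates, velocities):
    """One possible project outcome: each task gets a random past velocity."""
    return sum(e / random.choice(velocities) for e in estimates)

totals = sorted(simulate_total(remaining_estimates, past_velocities)
                for _ in range(10_000))
print(f"50% chance of finishing within {totals[5000]:.0f} hours, "
      f"90% within {totals[9000]:.0f} hours")
```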

I tried to embed the video, but for whatever reason it wasn’t working. So here is the link to the video.

Posted on October 12, 2007 in Quality, Video by adam

All the videos from this year’s DefCon (what is DefCon?) can be found on YouTube through this search. In order to test the way the bad guys are going to use your application, you have to think like them.

And yes, I know that it is not just black hats in attendance. But everything the white hats present in the name of furthering the field of computer security is going to be taken and pushed further by the black hats. That’s just the nature of the game.

Posted on October 8, 2007 in Quality, Video by adam

Before I start this summary, it should be pointed out that there is a bug in how Google indexed this video. Every other video’s title begins with ‘GTAC 2007’, but this one begins with ‘GTAC 07’, which means it could easily be omitted from search results. This happened to be the one presentation I was most looking forward to, but I thought it had been cancelled until I saw the GTAC playlist; I was originally having new videos pushed to me via an RSS search. Where do we log bugs against the mighty Google index?

Anyways. As mentioned above, I had high hopes for this presentation and in the end they were somewhat dashed. This video is an experience report from two people who work on the Ringo framework that Google has written (not yet publicly available, but it will be once they rejig some proprietary parts out of it). I had hoped it would be a tutorial-type session similar to some of the other ones on DSLs. There were a few tidbits worth recalling, though.

  • Requirements vs. desirements — gotta love those new words
  • The framework is maintained by a dedicated team, which I think is critical. Having written and maintained a couple of these (though certainly nothing this ambitious) I can attest to how hard it is to maintain / extend a framework while simultaneously trying to write more tests that use it.
  • UI elements are all handled through the same methods regardless of type. This means an object type change does not have a corresponding test code change – very cool (a guess at how this might look is sketched after these notes)
  • They even wrote a custom extension to Firebug to give them some of the information they need for object mapping
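
The talk doesn’t show Ringo’s internals, so the following is only a guess at the shape of the uniform-handling idea: route every interaction through one method and an object map, so a change in an element’s type touches the map entry and not the test code. Everything here, including the fake driver, is invented for illustration.

```python
class FakeDriver:
    """Stand-in for a real browser driver, just to make the sketch runnable."""
    def click(self, locator):
        print(f"clicking {locator}")

ELEMENT_MAP = {
    # logical name -> (element type, locator)
    "submit_order": ("button", "//button[@id='order-submit']"),
    "cancel":       ("link",   "//a[@id='order-cancel']"),
}

def activate(driver, logical_name):
    """Activate an element by its logical name, whatever its type."""
    element_type, locator = ELEMENT_MAP[logical_name]
    driver.click(locator)   # one verb, regardless of element type

# Test code stays stable even if "submit_order" becomes a link later:
activate(FakeDriver(), "submit_order")
```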



Direct link here.
