Just another ComMetrics – social media monitoring, best metrics, marketing metrics weblog

Next Generation Access broadband networks (NGA) – EU urges telecom groups to give rivals access

September 30th, 2008 · 1 Comment ·

Everybody is talking about Next Generation Access broadband networks (NGA). In fact, some claim they will be first when their systems go online in 2012:

Next generation telecoms networks (December 2007). UK Parliamentary Office of Science and Technology, Postnote No. 296 (4 pages) [Online] (Last accessed: September 19, 2008)

Others have begun to put it into practice already:

ewz.zürinet. Telecom von ewz

But the biggest challenges are:

– Who gets access (e.g., providers, firms) and how – triple play (digital radio/TV, Voice over IP, Internet), plus more services such as VPNs for organizations?

– Who manages the dependability of this network (is it the vendor, the owner and/or the re-seller, i.e. the telco)?

– What is a public e-communication network (the city owns the network, a hardware vendor runs it, and telcos and others act as re-sellers of network capacity to their clients)?

Today, your utility company could own a fibre optic network that is managed by a hardware vendor. The latter supplied the infrastructure owner (utility company or municipality) with the Ethernet service switches, the Ethernet access switches and the fibre optic cables that were laid into the ground.

The hardware vendor has a contract to service the network infrastructure for the utility company. The latter sells capacity to re-sellers including telecom companies and cable TV service providers.

How do you assure satisfactory levels of dependability and reliability in this kind of ‘public e-communications network’?

Does the regulator deal with the infrastructure owner, the re-sellers and the hardware vendor responsible for network servicing and maintenance? In fact, a telecom re-seller could hire the hardware vendor to run its services on the utility company’s network (e.g., to make best use of the capacity purchased and paid for each month).

Where does the buck stop? Who is responsible for resilience?

What does the regulator do?

The work of the ERG (European Regulators Group) on NGA (Next Generation Access broadband networks) is available at:

Unfortunately, the NGA group has not really dealt with the reliability and dependability of NGA networks. This is especially important to address because, increasingly, ownership may not rest with those selling services and capacity on such infrastructure. In fact, those that re-sell may not even run the network.

The European Commission has seen the writing on the wall should this issue remain unaddressed. However, as its press release shows, its focus is on market issues and not resilience per se:

IP/08/1370 2008-09-18 Broadband: Commission consults on regulatory strategy to promote high-speed Next Generation Access networks in Europe

So if you have an opinion, work in the business or bring other skills to the table, you had better take advantage and participate in the European Commission’s public consultation – you have until November 14, 2008 to submit your contribution:
Commission launches public consultation on Next Generation Access Networks (NGA)

Bottom line

In the environment where the incumbent operator (i.e. former monopolist) owned all the infrastructure, things were a bit easier. The regulator dealt with one company regarding network resilience and security.

As a company, one tried to make sure that the Service Level Agreement (SLA) with the monopolist contained a provision about dependability and security. Possibly, the SLA also stipulated financial consequences in case of a breakdown of services.

Today, the SLA a firm might secure with its telco could contain a disclaimer. A telecom service provider such as Orange or Vodafone makes sure that it is not responsible for a lack of reliability due to events outside its control. This may apply when the utility company’s infrastructure experiences problems or the network vendor running the infrastructure makes a mistake.
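The financial-consequences clause mentioned above can be pictured as a simple service-credit schedule. The sketch below is purely illustrative – the 99.9% target, the credit tiers and the function name are hypothetical assumptions, not any real telco’s terms:

```python
def sla_credit(measured_availability: float, monthly_fee: float) -> float:
    """Service credit owed when measured availability falls below the
    committed level. The 99.9% target and the tiers are hypothetical."""
    tiers = [            # (availability floor, credit as fraction of fee)
        (0.999, 0.00),   # target met: no credit
        (0.995, 0.10),
        (0.990, 0.25),
        (0.000, 0.50),
    ]
    for floor, credit in tiers:
        if measured_availability >= floor:
            return monthly_fee * credit
    return 0.0
```

The disclaimer discussed above is exactly what hollows such a clause out: downtime caused by the infrastructure owner or the network vendor would not count against the telco’s target.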

Regardless of how the above is resolved to your organization’s satisfaction, attaining resilience and dependability of one’s e-communications is vastly more complex today than just a few years back. Regulators have to wake up, look beyond the obvious (assuring access to the infrastructure) and focus on the resilience issues as well: who is responsible, and how is it regulated and assessed? Join the challenge and please remember:

1) Market forces alone will not bring us more resilience without the necessary regulatory framework.

2) Competition focuses on short-term matters (remember the financial crisis?) such as price levels and profitability. However, assuring dependable and resilient networks requires focusing on long-term investments and building networks with a hardware/software architecture that delivers. This does not necessarily help profitability.

Read more about these issues regarding the improvement of network resilience and robustness:

redundancy, dependability, reliability of public e-communications networks



10 fallacies of distributed computing

September 24th, 2008 · No Comments ·

    Originally there were the 7 fallacies of distributed computing (later extended to 8). We just added fallacies #9 & #10 – if you are a Blackberry or YouTube user, start worrying and read on.

In late 1991, Bill Joy and Dave Lyon had already formulated a list of flawed assumptions about distributed computing that were guaranteed to cause problems down the road:

– the network is reliable;

– latency is zero;

– bandwidth is infinite; and

– the network is secure.

James Gosling, Sun Fellow, had actually codified these four, calling them “The Fallacies of Networked Computing.”

As history tells us, in 1994 Peter Deutsch drafted seven assumptions that network architects and designers of distributed systems are likely to make, even though they might turn out to be wrong in the long run.

In 1997, James Gosling added another such fallacy. The assumptions are now collectively known as “The 8 fallacies of distributed computing.”

L Peter Deutsch

Essentially everyone, when they first build a distributed application, makes the following eight assumptions. All prove to be false in the long run and all cause big trouble and painful learning experiences.

1. The network is reliable (e.g., power failures, someone trips over the network cord, clients suddenly connect wirelessly, or the software has a bug – see the LSE example).

2. Latency is zero (time delay between the moment something is initiated, and the moment one of its effects begins or becomes detectable).

3. Bandwidth is infinite.

4. The network is secure.

5. Topology doesn’t change [i.e. the assumption that the arrangement or mapping of the elements (links, nodes, etc.) of a network – the physical and logical/virtual interconnections between different nodes – remains stable].

6. There is one administrator.

7. Transport cost is zero.

8. The network is homogeneous.

9. The physical units of computation are logical units (Distributed Computing Fallacy #9) – the assumption that, say, 10 servers must be better than 1 or 5 (a logical unit is a unique connection to an application program that could malfunction, while a physical unit is often a hardware device such as a server; sometimes several logical units run on one physical unit).

10. Network infrastructure is redundancy compliant (e.g., the Taiwan 2006-12-26 and Egypt 2008-01-29 cable incidents, which showed that even networks used by NYC banks are not redundancy compliant).
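Fallacy #1 is easy to make concrete: code that assumes a reliable network omits timeouts and retries. Below is a minimal sketch of a retry wrapper with exponential backoff – the function names and parameters are our own illustration, not part of any cited system:

```python
import random
import time

def fetch_with_retry(fetch, retries=3, base_delay=0.1):
    """Call an unreliable network operation, retrying with exponential
    backoff instead of assuming the network is reliable (fallacy #1)."""
    for attempt in range(retries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            # back off exponentially, with jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# a flaky stand-in for a network call: fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("link down")
    return "payload"
```

With retries=3 the wrapper survives two transient failures; a caller that assumed a reliable network would have crashed on the first one.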

The above still apply today, except that with cloud computing, numbers 9 and 10 have become a real threat. Ever more web-based applications run on server farms somewhere. Customers accessing services offered by their suppliers via Google’s or Amazon’s cloud computing services have already reported several service interruptions this year alone.

2008-02-19 – routing tables were corrupted, resulting in YouTube being inaccessible to some clients.

Bottom line

Whenever mission-critical applications require distributed or networked computing resources, one has to manage the risks by looking at the cost-benefit issues.

None of the above fallacies can be 100% eliminated. Nevertheless, the likelihood of a disaster happening can be reduced to acceptable levels if these fallacies are analyzed and addressed (e.g., in the software and network architecture).

Most importantly, all this comes at a price! Accordingly, unless public e-communications networks address the above fallacies, we might experience a disaster of some sort in the not so distant future.

Highly disruption-tolerant routing and data replication services – which your Blackberry RIM service fails to offer – are essential if cloud computing using locations around the globe is supposed to work.



4 million tax returns – Norway privacy blooper could result in surge of identity theft cases

September 20th, 2008 · No Comments ·

    Remember the UK privacy disasters:
    The latest Norwegian privacy blooper has again illustrated that building large systems entails risks, including employees not following procedures. And the larger the system, the bigger the repercussions if something goes wrong.

2002 marked the first time that details of all Norwegian taxpayers’ returns were published on the internet. Of course, the head of the Norwegian data protection authority immediately asked for the practice to be stopped.

It took just about a year before the government, led by then-prime minister Kjell Magne Bondevik, passed a law restricting online access to a maximum of three weeks from the day of publication.

What happened now?

Norwegian tax authorities informed the public that they had sent CD-ROMs filled with the 2006 tax returns of people living in Norway to the editorial staff of nine news organizations (e.g., national newspapers, radio and television stations). These CDs also contained the personal identification code, something that should have been deleted before the data were sent out.

What does the personal identification number reveal about a person?

The person identification code works as outlined below.

    Each person has an identification number: an eleven-digit birth number. It is assigned either at birth or when a foreigner registers with the National Population Register. The number is composed of the date of birth (DDMMYY), a three-digit individual number, and two check digits. Women are assigned even individual numbers, men odd ones. This system allows Norway to uniquely identify people born between 1854 and 2039; thereafter, a new system will have to be used.

    It is also interesting that people without permanent residence in Norway are assigned a D-number upon registration with the population register. The D-number is like a birth number, except that the system adds 40 to the day of the month the person was born.

Based on the above, it is obvious that this number allows one to identify each person by revealing his or her birth date, gender, residence status and so forth.
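The decoding described above can be sketched in a few lines of Python. This is a hedged illustration only: check-digit validation and the century rules encoded in the individual-number ranges are omitted, and all names are ours:

```python
from datetime import date

def parse_birth_number(number: str) -> dict:
    """Decode a Norwegian eleven-digit birth number:
    DDMMYY + three-digit individual number + two check digits.
    Century handling and check-digit validation are omitted here,
    so years are naively assumed to be 19xx."""
    if len(number) != 11 or not number.isdigit():
        raise ValueError("expected 11 digits")
    day = int(number[0:2])
    # D-numbers (no permanent residence) add 40 to the day of birth
    is_d_number = day > 40
    if is_d_number:
        day -= 40
    month = int(number[2:4])
    year = 1900 + int(number[4:6])  # simplification: 20th century only
    individual = int(number[6:9])
    # even individual numbers are assigned to women, odd to men
    gender = "female" if individual % 2 == 0 else "male"
    return {"birth_date": date(year, month, day),
            "gender": gender,
            "d_number": is_d_number}
```

This is exactly why leaking the number is serious: birth date, gender and residence status fall straight out of eleven digits.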

When did it happen?

The CDs were sent out during the summer already. Why Norwegian authorities did not inform the public about this disaster until the media went public remains a mystery.

Nevertheless, this privacy blooper indicates that transparency does not work very well when it comes to privacy and data protection. The agency did not inform anybody about its mistake and refused to go public until the media wrote about it.

What about responsibility and accountability? Both were ignored, it seems.

Some cases of identity theft likely

Norwegian tax authorities stressed that the documents containing the tax records and personal identification numbers could only be opened using a secret 30-character code entered during login to access the data on the CD-ROM.

Nevertheless, a few editorial teams got hold of this secret code, which was available for a few days only, thereby gaining access to the personal identification numbers.

Much of the data is already out and has been used by some people. In turn, we expect a surge in identity theft cases in Norway in the next few months – all because somebody forgot to check the CDs before they left the office, a five-minute task. Unbelievable, but such things happen.

My take on this story

Once again, the case shows why using large schemes and databases as required for national electronic ID cards, tax records or biometric passports:

– puts people’s privacy at risk,

– increases the likelihood of a substantial surge in the number of identity fraud cases compared to today, AND

– makes adequate protection to assure data confidentiality and integrity a nearly insurmountable challenge, given the complexity of such huge databases.

Similar to Britain’s cases mentioned at the beginning of this story, human error and a lack of quality control are at the root of this disaster. How Norway’s tax authority could make such a huge mistake while supposedly having best-practice quality control procedures in place is unclear. Nobody checked the CDs before they left the office – simply unacceptable.

Viewing what was on the CDs would have revealed the mistake, allowing these CDs to be destroyed and new ones to be pressed and mailed to the media.

Is this too much to ask for?
As well, the tax authorities have so far been unable to explain how and why such a stupid mistake happened. What does the government intend to do to keep the likelihood of a similar case happening in the near future low? Endangering Norwegians’ privacy and increasing their risk of identity theft is one thing. Keeping silent about what went wrong (we failed to check – why?), and about what will be done to avoid this in the future, is irresponsible.

NO – Act of 14 April 2000 No. 31 relating to the processing of personal data (Personal Data Act) (17 pages) [Online] (Last accessed: September 19, 2008)

    The purpose of the Act of 14 April 2000 No. 31 relating to the processing of personal data (Personal Data Act) is to protect natural persons from violation of their right to privacy through the processing of personal data. The Act shall help to ensure that personal data are processed in accordance with fundamental respect for the right to privacy, including the need to protect personal integrity and private life and ensure that personal data are of adequate quality. This Act transposes the Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data into Norwegian law.

Norway’s Personal Privacy Commission has a December 8, 2008 deadline for “delivering a comprehensive status report outlining the challenges facing the protection of personal privacy” to the Storting (Parliament).

How prominently this latest privacy blooper involving all tax records will feature in this report is anybody’s guess. Nevertheless, this incident indicates that besides technical risks, human error tends to exacerbate threats to personal privacy and the risk of identity theft.



Dependability of public e-communication networks – ropes to skip 1

September 17th, 2008 · 3 Comments ·

    Resilience describes the ability of communications networks to provide and maintain an acceptable level of service in the face of various challenges to normal operations.

More and more, we live in a world where the use of information and communication technology is part of our daily lives. Hence, dependability and network resilience are becoming ever more important for all of us. We began this series with an introductory post here:
Dependability of public e-communication networks – ropes to skip – introduction

Today we continue with challenge nr. 1:

1) Setting the dependability rules

Too many issues in estimating the appropriate level of dependability and resilience of public e-communication networks involve judgment and subjectivity (e.g., what counts as a threat, private versus public networks, the financial impact of a disaster).
Both the resilience target and its content are arbitrary. The question that must be asked is:

“Do the regulatory framework and its implementation imply a higher level of dependability and resilience of public e-communication networks tomorrow than we experience today?” (See also Point 6, coming up in a future post.)

Challenge: too many subjective assessments about the resilience or dependability of public e-communications networks are made by people whose careers depend on pleasing ministers.

As well, competitive markets may not provide infrastructure owners with the long-term incentives needed to invest in dependable and reliable e-communication networks.

As last week’s London Stock Exchange outage taught us, however, the difficulty lies in defining which systems are critical. As well, what one party considers resilient may not meet another party’s criteria.

LSE outage – five lessons for achieving better network dependability

Interpretations are subjective, and while the LSE trading platform was down, the opportunity cost of being unable to trade may have been somewhat lower because of the current stock market jitters (i.e. the sacrifice in forgone profits is smaller when opportunities are less plentiful). What can be done, and what the LSE is willing to do, to reduce the likelihood of such a shutdown happening again is unclear.

In the LSE case, whatever regulatory framework applied (telecom, financial markets), the LSE trading network does not appear to be part of the public e-communications networks regulated by Ofcom (the telecom regulator in Britain).

The amount of money involved in the trades executed on the LSE board, as well as its importance for financial markets, is considerable. In turn, one wonders whether the proprietary TradElect platform that handles transactions for the LSE, and the e-communications networks traders use to participate in this market, are not a critical part of the country’s infrastructure.

UK stakeholders as well as financial market participants should address this and find a way to define what level of dependability is involved and what level of resilience is required. We await a speedy solution, but don’t hold your breath.



LSE outage – five lessons for achieving better network dependability

September 11th, 2008 · 3 Comments ·

    The London Stock Exchange (LSE) experienced a seven-hour breakdown this Monday that gave traders barely more than an hour of trading book-ending the day. TradElect, a proprietary system developed by Accenture using HP ProLiant servers and Microsoft .Net and SQL Server, crashed due to overload and a software bug. Microsoft and Accenture are still investigating the root cause of the problem.
    We provide you with five lessons we took from this case. The case illustrates why public e-communication networks must offer resilience that minimizes the risk of such network failures. The loss of connectivity between clients and the LSE’s trading systems forced the market to suspend trading shortly after 9:00. Nobody seems to know the costs this has caused (lost trades, etc.).

Recently I started a series of posts focusing on the dependability and resilience of public e-communication networks:

Dependability of public e-communication networks – ropes to skip – introduction

This Monday (2008-09-08) it happened: London’s stockbrokers were forced to twiddle their thumbs as trading on the main stock exchange shut down due to a software glitch.

On Monday the LSE claimed that the outage was not related to an increase in trading volumes. Instead, it blamed connectivity issues for the outage.

Here are some of the lessons – not necessarily in order of importance – we should take from this serious incident of network failure.

Lesson 1 – proper software and hardware architecture required

On Monday, TradElect, a proprietary system built on Microsoft technology for the LSE, had an outage. Unfortunately, no technical information was provided.

Later, LSE told some media people that a software problem crashed LSE pricing, leaving trading floors close to silent.

Without all the facts it is difficult to get a handle on this issue. Monday’s problems at the LSE were significant. Nevertheless, they represented the first major outage in eight years.
But the LSE outage has to make traders rethink their risk management. Traders must rethink their reliance on a single system that offers neither the software nor the hardware architecture to deliver satisfactory dependability and redundancy.

For instance, assuring that an overloaded system has access to additional processing capacity, fibre optic capacity and redundant switches is a must. Such an investment is required to assure satisfactory dependability. The LSE is an important public e-communication network for traders and their clients around the globe. Its software and hardware architecture should reflect the importance the LSE has for financial markets.

Lesson 2 – extensive testing required

Besides the software and hardware architecture that such a critical system requires, extensive testing is needed to avoid the problems that happened this Monday. The FTSE 100 ended down sharply on 2008-09-05 (Fri). This forced traders who were betting on further falls in the index to close their positions. In turn, the system experienced a kind of ‘overload’.

An exercise that tests only parts of the system will not cut it. Instead, the full system – user access, trade execution, database redundancy and overload behaviour – must be part of an exercise that tests the system’s limits as experienced with Friday’s trading spike.

In this case, it seems no such comprehensive testing had been done prior to Friday. The Friday spike in trades led to the crash on Monday. The LSE outage demonstrates that extensive testing exercises are required to assure the system is dependable enough to deal with trading spikes.
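The kind of full-system load exercise argued for here can be sketched as a toy harness. Everything below is illustrative – the handler, the capacity figure and the metrics merely stand in for a real end-to-end test of user access, trade execution and database behaviour:

```python
import concurrent.futures
import time

def load_test(handler, n_requests=1000, workers=50):
    """Fire n_requests at `handler` from a pool of workers and report
    throughput and failures; `handler` stands in for one end-to-end
    request through the full system."""
    failures = 0
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for ok in pool.map(handler, range(n_requests)):
            if not ok:
                failures += 1
    elapsed = time.perf_counter() - start
    return {"requests": n_requests,
            "failures": failures,
            "req_per_sec": n_requests / elapsed}

# a toy system that rejects traffic once a fixed capacity is exceeded
def capacity_limited_handler(i, capacity=800):
    return i < capacity  # requests beyond capacity "fail"
```

Running this against the toy handler immediately surfaces the capacity ceiling – exactly the kind of limit a test of only parts of the system would miss.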

Lesson 3 – tried and tested recovery procedures are a must

Extensive testing based on realistic simulations using the full system also helps verify whether the recovery procedures put in place work properly.

The trading overload happened on Friday, and system monitoring indicated even then that not all was working properly. Unfortunately, the LSE did not have the right procedures in place to fix the problem in the 62 hours the weekend gave it (Friday 19:00, after trading was done, through Monday 9:00).

Tried and tested recovery procedures are needed to avoid such problems. LSE management failed here by not insisting on having such procedures in place and providing the budget needed.

Lesson 4 – accurate and truthful information is the minimum one should expect

LSE’s communication regarding this outage has been poor if not disastrous. For starters, the LSE responded very slowly in deciding whether it was an orderly market or not. Once it shut down the system around 9:00, it failed to put up information on its web site. Even on 2008-09-11 (Thu) you cannot find any information about this seven-hour outage in the press section of the LSE:

LSE press releases

When such a calamity happens, assuring the continued confidence and trust customers have in your brand requires that you communicate accurately, quickly and truthfully about the problem. LSE failed the test.

Lesson 5 – new platforms highly dependent on LSE for reference price

Measured by value traded, Chi-X looks to have been down by a third compared to the previous business day, 2008-09-05 (Fri). Turquoise (which experienced a breakdown last week) and ITG were similarly affected.

The above illustrates that traders and clients were reluctant to deal on these new platforms without a reference price available through the LSE trading board. Monday’s problems demonstrated that these exchanges rely on the LSE to set the market price – in other words, the dependence of alternative exchanges on the LSE board cannot be overestimated.

Hence, the LSE outage affects many trades beyond those made by its own customers, including those made on the new platforms.


We can now move on to business as usual or take the necessary steps that improve the LSE’s trading systems’ dependability and resilience when it comes to such things as trading overloads and so on.

Customers should surely ask for the LSE’s communication strategy to be changed. Most important, monitoring the performance of individual components, like databases, software and servers, will not do. Instead, regular exercises are needed that:

1) generate the required artificial ‘load’ to test the full trading system under high-volume activity – as happened last Friday, AND

2) implement tried and tested recovery procedures that work – a weekend should be enough to fix the problem, yet the LSE failed to fix it over the weekend.

So will we learn from this incident or are we willing to have it again sometime down the road with effects that we might be unable to imagine today? For instance, what will it do to financial markets worldwide if London – or any important exchange – crashes again?

The costs to the economy must be exorbitant.


The LSE’s platform runs on TradElect, a proprietary system developed by Accenture using Microsoft .Net and SQL Server systems. It is about 15 months old and has been running since June 2007.

The LSE has touted that the system will enable it to expand and speed up its capacity for trades. September 2008 was the deadline given for the system to show that it can reach and handle 10,000 continuous messages per second without problems.

So far it cannot handle that, and I am not aware of any Microsoft-software-based solution that is currently able to do so.

LSE moved IT management for TradElect in-house from supplier Accenture. The latter still provides software development services to LSE.

As well, due to this outage I am not sure whether TradElect will be able to offer trades in Italian equities this month, as planned. Of course, as you know, the LSE acquired Borsa Italiana in 2007.



Dependability of public e-communication networks – ropes to skip – introduction

September 6th, 2008 · No Comments ·

    Resilience describes the ability of communications networks to provide and maintain an acceptable level of service in the face of various challenges to normal operations.

More and more, the use of information and communication technologies pervades our daily chores. Accordingly, dependable communications networks and services are now critical to public welfare and economic stability.

Attacks against that infrastructure or natural disasters might reveal the increased vulnerabilities and dependencies of our societies.

So how much do we depend on our communications networks?

Dependability (click on the link and choose Login as Guest for free access to the definition) is a multi-faceted concept. One can classify types of failure and arrive at a set of attributes (or characteristics) of dependability, the main ones being:

  • availability – “readiness for correct service,”
  • reliability – “continuity of correct service,”
  • safety – “absence of catastrophic consequences on the user(s) and the environment,”
  • integrity – “absence of improper system alterations,”
  • maintainability – “ability to undergo modifications, and repairs.”

(For more definitions see the European Union SecurIST Advisory Board: IT risk, threat & vulnerability mitigation – click on the link and choose Login as Guest for free access.)
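Availability, the first attribute above, is commonly quantified from mean time between failures (MTBF) and mean time to repair (MTTR). A short sketch – the formula is the standard steady-state approximation, and the function names are our own:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time a service is
    'ready for correct service', computed from MTBF and MTTR."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def downtime_per_year(avail: float) -> float:
    """Expected hours of unavailability per (365-day) year."""
    return (1.0 - avail) * 365 * 24
```

For example, a network that fails on average every 999 hours and takes one hour to repair is 99.9% available – which still means almost nine hours of downtime a year.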

In the following weeks and possibly months I will focus on this issue. In particular, I will address some of the greatest obstacles societies might face when trying to achieve better resilience against attacks. Since the world has become a global village, any country acting independently may face difficulties in effectively preventing and responding to internet-based attacks, which often originate outside national borders.

Without putting one’s own house in order, however, things are unlikely to improve for a state and its citizens. In other words, acceptable dependability levels are unlikely to be reached through international collaboration if one’s own department or house looks like a disaster area. Accordingly:

The level of resilience accomplished is the outcome or result of an acceptable level of dependability of a public e-communication network.

A first step is to get one’s own house in order.

Once national efforts have resulted in a higher level of dependability resulting in greater resilience for the e-communication network, international collaboration can help in improving risk management.

So what are some of the obstacles that societies face in trying to achieve this outcome?

Stay tuned – you can read about it all here in upcoming posts.



CFTC accuses Optiver of ‘banging’ the close

July 30th, 2008 · 2 Comments ·

    The issue of e-discovery and market manipulation in the oil sector has been thrown into sharp focus by a CFTC lawsuit.
    The US Commodity Futures Trading Commission (CFTC) charged Optiver Holding BV, two of its subsidiaries and three of its employees. All parties are accused of having manipulated the settlement price at the end of the trading day on Nymex.

What is the issue?

The US Commodity Futures Trading Commission (CFTC) alleged that Optiver Holding BV, two of its subsidiaries and three high-ranking employees manipulated the prices of crude oil, heating oil and gasoline futures contracts on the New York Mercantile Exchange at least five times during 2007-03.

Complaint: Optiver US, LLC, et al.

What are the charges?

The CFTC claims the defendants engaged in market manipulation. Gerben Goojiers, a trader at Optiver, had a conversation with a colleague on 2007-03-05:

    “I’ll look into improving the hammer tool tomorrow, so that we can do this with a finer frequency so that should give us a better chance of being up front again”

What is the hammer? A trading strategy used to influence the settlement price at the end of the trading day on Nymex, which involves taking big positions just before the close to manipulate prices. Optiver allegedly attempted a similar scheme at least once on ICE Futures Europe, the London-based energy exchange. However, it seems it was unable to influence the settlement price on ICE Futures Europe.

Optiver BV posted a short press release in response to these charges:

    CFTC Matter, 25 July 2008: We have learned recently that the United States Commodity Futures Trading Commission filed a civil lawsuit against Optiver. We have received a copy of the complaint and are reviewing it. Optiver takes the Commission’s action very seriously, and is treating it with the utmost attention and care. We take tremendous pride in our longstanding reputation for integrity. We believe that we have run our business by doing not only what is best for the bottom line, but what is right, and we will continue to do so. Obviously, we cannot comment further nor answer any news media questions concerning the complaint until we and our legal counsel, Schiff Hardin, have had the opportunity to review the charges fully.

How is the CFTC trying to make its case?

The CFTC is laying out its case with several telephone recordings as well as e-mails. In these, two of the defendants discussed the ‘fairy story’ they would give regulators in case Optiver was investigated for manipulation.

Traders allegedly said they intended to trade a volume sufficient to manipulate prices, while stopping short of doing it so massively that CNBC or CNN would report it.

What does it mean for you and InfoSec?

In the Internet age, you cannot say anything to anyone without it eventually being heard by the regulator. In turn, this could get you into trouble down the line.

According to the CFTC claim, Mr Stepanek wrote in an e-mail:

    “Have you thought about just doing the crude massively big? I think with 6,000 or so you can even move that one, like we’re doing with the minute marker. Obviously the gasoline is just a great product for it. Could this be considered market manipulation? As long as we don’t trade against ourselves, everything seems above board, but I still think the exchange might start to look into it.”

alleged manipulative conduct

So your staff should be aware that any phone call or e-mail correspondence could ultimately be used by a regulator to enforce rules that might have been broken.

The CFTC acknowledges assistance in this investigation from the UK’s Financial Services Authority and the New York Mercantile Exchange.

Please check out:
follow InfoSec on Twitter
InfoSec and Twitter – ropes to know #2
e-discovery – how it works and what it means for your enterprise
be the first to know – subscribe
Regulation that matters – e-discovery and BP litigation regarding safety failures in the U.S.
BP goes greener or just meaner?
The seven deadly sins of archiving digital information


→ 2 CommentsTags: cftc · crude · gasoline · heating · issue · mercantile · optiver · york

DNS authentication vulnerable to cache poisoning

July 22nd, 2008 · No Comments ·

    Affects Windows, Unix/Linux, Juniper and Cisco products. This was first mentioned around July 7. But now we have a full discussion about responsible disclosure.
    Why does this vulnerability matter and why should it be out in the open?
    To make our DNS system more secure against such and other types of attacks, and because security through obscurity is just plain dumb.

What is the technical issue

Vulnerable DNS software may be tricked into supplying incorrect IP addresses for requested hostnames. This could allow an attacker to redirect internet traffic to a possibly malicious website.

In practice, this could work if an attacker floods a DNS server with requests for many domain names, for example a series of random subdomains. Since the name server has not seen these names before, it queries a root server for the name server that handles lookups for domains ending in .com. The attacker then sends fraudulent lookup responses to the DNS server, forged to appear as if they came from the authoritative .com name server. Unfortunately, once enough requests have been issued, one of these spoofed responses is likely to match an outstanding query, and the IP address for the requested domain will be falsified.
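The race described above can be put in rough numbers. With a fixed source port, the attacker only has to guess the resolver's 16-bit transaction ID, and each forced lookup is another chance to win. This back-of-the-envelope sketch is our own illustration, not exploit code, and the attempt counts are made up:

```python
# Back-of-the-envelope sketch (our illustration, not exploit code): with a
# fixed source port, the attacker only needs to guess the 16-bit DNS
# transaction ID. Each forced lookup is another chance to win the race.

TXID_SPACE = 2 ** 16  # 65,536 possible transaction IDs

def spoof_success_probability(lookups, guesses_per_lookup=1):
    """Probability that at least one spoofed reply matches a live TXID."""
    p_miss = 1 - guesses_per_lookup / TXID_SPACE
    return 1 - p_miss ** lookups

# Forcing lookups of many nonexistent subdomains, 100 spoofed replies each:
for lookups in (100, 1000, 5000):
    print(lookups, round(spoof_success_probability(lookups, 100), 3))
```

With a few thousand forced lookups, the odds of a successful poisoning approach certainty, which is why randomizing the source port (adding roughly 16 more bits the attacker must guess) is the widely recommended mitigation.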

How new is this?

The issue is not that new: a first alert was issued around July 8-10 by some alerting services, though without many details.

Note the description section of SecPod ID: 10109, Multiple Vendor DNS Spoofing Vulnerability. This advisory alludes to the critical element of bogus additional RRs. To the best of our knowledge, it appears to be the only one that mentions this problem.

What is the discussion right now – someone posting code

The question now is when someone will post working code along with a full description. The exploit code has been available in certain circles for some time. By publishing the vulnerability and making the details available, responsible security engineers would have the opportunity to check whether the problem affects their systems and DNS handling.

Security advisories like the one below, which we received this morning, are of little help because they do not provide a full description:

AL-2008.0080 — [Win][UNIX/Linux][Juniper][Cisco] — Multiple DNS implementations vulnerable to cache poisoning

Nor do the suggested solutions take care of the problem described in the above advisory.

The underlying premise of all the discussions is that we have to keep the details under wraps until Dan Kaminsky can present his findings at the Black Hat conference. Naturally, some researchers have posted some thoughts about this issue including a possible exploit of this vulnerability:
On Dan’s request for “no speculation please”

However, keeping information like a DNS spoofing vulnerability under wraps while the bad guys already know the details is neither responsible disclosure nor helpful in better protecting the Internet. It actually increases our risks while helping Dan Kaminsky and Black Hat get publicity. Is this ethical and responsible behavior?

Please check out:
follow InfoSec on Twitter
CyTRAP Labs – advisory – Versatel, Vivendi and Tele2 – vulnerability fixed
DNSSEC – will the Trust Anchor Repository (TAR) make a difference
be the first to know – subscribe
DNSSEC – a global effort that experiences difficulties on the road to success
disclosure when a zero-day exploit is in progress


→ No CommentsTags: attacks · contentless · counts · dumb · flawed · hazards · obscurity · plain

InfoSec and Twitter – ropes to know #2

July 15th, 2008 · 5 Comments ·

Twitter, a sort of micro-blogging or SMS messaging that allows one to reach many people quickly, is becoming ever more popular.

Regulation is clear: firms must be able to produce digital files. Now a U.S. Member of Congress is using Twitter. Will Congress or the elected officials be able to produce tweets in a case of e-discovery, or will the judge throw the book at them?

What about the risk against your corporation’s reputation and brand coming from these social networks and social media? What about threats and vulnerabilities?

We tell you what you should watch out for in this post.

A while back we did this

InfoSec and Twitter – ropes to know #1

In the above post amongst other things we pointed out that:

    As we explore this wonderful world of cyberspace, we must beware that whatever we say, write or sing about is being recorded.

This does, of course, also apply to your workforce or colleagues using Twitter at work or from home. Just in case you thought it does not apply: usage numbers are growing rapidly in the U.S. and Europe, as well as in Japan.
2008-07-08 – Twitter growth continues despite outages

The above graph from a firm called Hitwise shows that Twitter usage is growing. This growth happens despite the several outages Twitter has had during the last 10 weeks and will likely continue to have in the next few months.

What does this mean for compliance

The Federal Rules of Civil Procedure render electronic communications, from both defendants named in a lawsuit and third parties who may have information pertaining to the case, admissible in court.

Imagine the case where a subpoena said,

    ‘Here are the 60 named plaintiffs in these lawsuits. For those 60 plaintiffs, give us their tweets or Twitter messages during the course of the meeting or workday’.

Such a request might be approved by the court. Unfortunately, this raises a lot of privacy concerns that one would not necessarily have to deal with if the company were just trying to get the material from the people involved in the lawsuit.
Text messages and tweets (the 140-character messages people send using Twitter) are likely viewed the same way as other forms of digital communication, such as e-mail.

Unfortunately, we also have to remember that whatever we say is recorded. In turn, when your employer is called on by the court to produce something, it is incumbent on you to produce it.
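On the record-keeping side, being able to produce messages on demand implies an archive that can be queried by author and date range. The sketch below is entirely hypothetical; the function names and data layout are our own invention, not any real archiving product's API:

```python
# Hypothetical sketch of the record-keeping side of e-discovery: if short
# messages are discoverable like e-mail, the firm needs an archive it can
# query by author and date range. All names here are our own invention.
from datetime import datetime

archive = []  # in production this would be write-once, tamper-evident storage

def record_message(author, text, sent_at):
    archive.append({"author": author, "text": text, "sent_at": sent_at})

def produce_for_discovery(author, start, end):
    """Return all of one author's messages in the subpoenaed date range."""
    return [m for m in archive
            if m["author"] == author and start <= m["sent_at"] <= end]

record_message("jdoe", "meeting moved to 3pm", datetime(2008, 7, 1, 9, 30))
record_message("jdoe", "sell order confirmed", datetime(2008, 7, 2, 14, 5))
record_message("asmith", "lunch?", datetime(2008, 7, 2, 12, 0))

hits = produce_for_discovery("jdoe", datetime(2008, 7, 1), datetime(2008, 7, 3))
print(len(hits))  # prints 2: both of jdoe's messages fall in the range
```

The point of the sketch is that capture must happen at send time; if tweets bypass the corporate archive, there may be nothing to produce when the subpoena arrives.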

Case study – U.S. Congress using Twitter

This is a no-brainer: people will use it from their private smartphones, iPhones and so on, during the workday and when off work, talking about work as well as private matters.

A very legalistic debate is happening in the U.S. Congress over the use of Twitter and QIK by its members:

Is the House going to limit the free speech of its own members?

U.S. Rep. Michael Capuano, Chairman of the Congressional Commission on Mailing Standards, is aware that using Twitter to reach one’s constituency means taking advantage of social media. In fact, U.S. Congressman John Culberson (R-TX) is using Twitter and QIK, two similar messaging tools, to communicate freely with the public. Naturally, by being first he is racking up all the positive public-image points this entails, making him even better known, something that might come in handy at re-election time.

One could suggest that all congressmen should emulate this behavior. Naturally, this would require changing some existing House rules that actually forbid members of Congress from posting “official communications” on outside sites (webpages, Twitter, and so on).

What does it mean for the U.S. Congress and the European Parliament?

For an elected politician, be it a minister, senator, member of the European Parliament or an EC Commissioner such as Viviane Reding, who loves to use Twitter, maybe :-), sending out tweets to followers, such as one’s constituents, can keep them informed about the latest developments on the floor (e.g., how a bill is doing).

Nonetheless, some legal barriers may have to be removed in some countries to allow elected officials to use social media effectively for the benefit of their constituents.

Also, e-discovery requires that you are able to produce evidence, such as e-mails, during the discovery process. I am just waiting for the first judge to ask for evidence in the form of an accused party’s tweets. What will happen if they cannot be produced?
What is your take on this issue?

Please check out:
follow InfoSec on Twitter
be the first to know – subscribe
Early Warning System – what works and how Twitter and Facebook can help
follow Congressman Culberson on Twitter
Warren Buffet – ropes to skip – c-level blogs – FAQ
Regulation that matters – e-discovery in Intel litigation


→ 5 CommentsTags: admissible · digg · lawsuit · named · plaintiffs · pretending · protest · twitter

EU telecoms market – European Parliament – what it means for InfoSec

July 9th, 2008 · No Comments ·

    Two of the European Parliament’s committees have watered down proposed EC legislation regarding Europe’s telecoms markets.

    The proposed changes have implications for telecoms regulators administering legislation and assuring compliance regarding the resilience of public e-communication networks, and for what telcos can and cannot do (e.g., data security breaches, mobile number portability, etc.).

Some time back we informed our Twitter followers about what happened on 2008-07-07, the day two of the European Parliament’s committees, namely:

Industry, Research and Energy Committee (ITRE) and the
Internal Market and Consumer Protection Committee (IMCO)

voted on the European Commission’s proposals to reform the EU Telecom rules. Check out the link here:

Important here is that the Industry Committee also approved a report by Pilar del Castillo Vera (EPP-ED, ES), which proposes setting up a Body of European Regulators in Telecommunications (BERT), composed of the 27 national regulatory authorities, as an alternative to the European Electronic Communications Market Authority (EECMA) advocated by the Commission.

What it means for InfoSec

The compromise proposal for the draft framework directive, put forward by Catherine Trautmann and Pilar del Castillo Vera of the ITRE Committee as well as Malcolm Harbour of the IMCO Committee, was accepted. This brings in a few changes and 30 amendments. Several things are of interest to people in InfoSec:

a) ENISA‘s mandate is being extended until 2012. Whether this means the agency will exist beyond that date, or will move from Crete in the meantime, is not known at this time. However, some members felt that a move to a more central location might have to be considered to improve the accessibility of ENISA and its human capital to Member States and regulators.

b) The EC had proposed some stringent rules regarding data loss and data security breaches. These would have required telcos and internet service providers to meet stringent guidelines and to inform consumers in case the privacy of their data was breached. This part of the proposed legislation was watered down by the committee, which is really unfortunate.
c) BERT was suggested as an alternative mechanism to EECMA. How such a body would be effective in its daily operations is, however, very questionable. Members of Parliament discussing the EC proposal failed to go into any detail on how this would work in practice (how will compliance be checked and enforced, and how will best practices evolve to make BERT effective?).


Even though the final view of the European Parliament will only be known once the Plenary has voted on the Commission proposal (2008-09-03 is to be the day), the votes in ITRE and IMCO are important steps towards shaping the final legislative texts to be adopted by the European Parliament and the Council.

Also check out this:

2008-07-08 – European Parliament – press release: Telecoms package: EU-wide spectrum management for full benefits of wireless services
Votes by the two Committees – European Parliament – Telecoms package: EU-wide spectrum management for full benefits of wireless services
the wrong way – online privacy and social networks
EU telecoms market – European Parliament – what it means for InfoSec


→ No CommentsTags: 2012 · committees · infosec · parliament · regul · regustand · stays · watered