Hacking Voting Machines
I came across this article while perusing Security Focus' website. It talks about how insecure the electronic voting machines manufactured by Diebold, an Ohio company, are. Pretty interesting stuff, though there are links in the article to anti-war sites, and near the end the column takes on the all-too-common "Linux is great, everything else sucks" sort of tone when referring to electronic voting machines in Australia.
I was surprised to read that voting results were transmitted across the internet.
Pretty frightening to me.
Tuesday, November 25, 2003
Monday, November 24, 2003
The Hell of Blackouts
An interim report on the August 14th US-Canada blackout was recently released. The document is over 130 pages, and talks about several causes of the blackout, but the most interesting thing is that it seems that when it started no one knew what was happening due to computer malfunctions.
The report starts with an executive-type overview of the way the systems interact, due to the difficulty of storing and transmitting electricity. One of the inaccuracies in the report states that electricity travels at the speed of light. I myself had been taught 250 mph by one of my Ohio State University physics professors, but it appears that is wrong, as you can read about here and here. It is interesting, but dry. I don't blame the report writers for not being precise about a scientific fact, given their final target audience, but it makes one wonder what else they 'glossed over' in their 'interim' report.
The main computer system that monitors the electrical grid for FirstEnergy (FE) in Ohio (just a few hours north of where Jack lives, and the start of the blackout) is the GE Harris XA/21 EMS system. According to the documentation, it is a UNIX-based system that uses the TCP/IP network protocols (the same ones you use every day on the internet) and ODBC (Open Database Connectivity) standards to a POSIX-compliant SQL (Structured Query Language) database backend. The system is programmed in ANSI C and FORTRAN.
What this essentially means, and as is indicated in the brochure, is that it uses "Open Systems": industry-standard protocols and programming interfaces that allow other types of systems to connect to it.
It's kind of how the Internet works.
Pretty much everything on the Internet uses "Open Standards", or you'd be downloading a new program every time you visited a new website.
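To make that "open systems" idea concrete, here's a minimal sketch (in Python) of what querying such a backend looks like from any ODBC-capable client. The DSN, credentials, table, and columns are all hypothetical; I obviously don't have the XA/21's schema. The point is simply that a standards-based backend will talk to anything that speaks the standard.

```python
# Minimal sketch of an ODBC client talking to a SQL backend over TCP/IP.
# The DSN, login, table, and columns below are all made up; the real
# XA/21 schema isn't public. The point is that anything speaking the
# standard could, in principle, connect and query.
import pyodbc  # third-party ODBC bridge for Python

conn = pyodbc.connect("DSN=ems_history;UID=operator;PWD=secret")
cursor = conn.cursor()

# Pull recent alarm records (hypothetical table and columns).
cursor.execute(
    "SELECT alarm_time, substation, description "
    "FROM alarms WHERE alarm_time > ? ORDER BY alarm_time",
    "2003-08-14 12:00:00",
)
for alarm_time, substation, description in cursor.fetchall():
    print(alarm_time, substation, description)

conn.close()
```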
Now, for all of you Conspiracy Theorists, time to get out your foil hats. (I've been harping on the foil hats a lot lately).
James over at Hell In A Handbasket tends to "pooh-pooh" the possible threat of a cyberattack, but I think this is a case that proves one could do a lot of damage if launched against the right targets.
The shit really started to hit the fan at 12:15 PM EDT, about 3 hours before the blackout.
Oh, did I mention that FE's GE XA/21 systems' software hadn't been updated since 1998? Guess how many Unix-type operating system vulnerabilities have been released in that 5-year period? Lots. Who knows what other modules the system was running? But I digress.
Anyway, just after noon, one of the monitoring systems quit working due to "inaccurate data" (buffer overflow, anyone?). However, no one at the main control center knew it. This caused another large generation unit in Eastlake to shut down around 1:30 PM, and around 2:15 PM the alarm and logging computer system (that darned XA/21) was completely dead and useless. At 3:05 the whole blackout started and quickly put millions of people into darkness.
We're lucky that more people didn't end up hurt during that outage.
Losing the Eastlake plant itself didn't cause the blackout, but because the computer system was FUBAR'd, no one knew what was going on. The report says that the operators being unaware of the situation because of the computer failure, and the lines falling into trees, were the two main causes of the blackout.
OK - it wasn't that no one knew what was happening. In fact, one of FE's employees called around to get some things reconfigured to support the high load that day, but because of the monitoring system failure, he wasn't working with enough information. Someone did figure out that a monitoring device had failed and turned the system off to correct the error, but then went to lunch, forgetting to turn the monitoring system back on. Even though the monitors run every 5 minutes, no one noticed it wasn't working right until an hour and a half later.
So someone turned it back on.
But by now the data coming across was bad, and while a systems engineer identified the possible problem with the grid at about 2 PM and finally called the main operator an hour later, the main operator mistakenly believed that everything was running fine. It took another 20 minutes to get that straightened out, and then another 20 minutes to get the system reporting everything correctly.
That was 2 minutes before it all went to hell.
You see, about 2 hours before that, the alarm and logging system had gone down.
At about 2:14, the system wasn't reporting anything of any use. In the next 30 minutes, FE lost the primary and backup servers completely. Both systems died? The report doesn't say conclusively how they failed (though some theories are discussed later).
But guess what? No one monitoring the system noticed the servers had crashed for an hour.
Guess Homer had too many donuts that day.
AEP had even called FE to report problems, but of course, since the system was down, FE saw no alarms or logged problems. DOH! The backup server had failed 13 minutes after the primary server, but still no one noticed.
Well, no one WORKING noticed.
The system did automatically page the IT staff.
Everyone who works in a building with IT staff knows that things can go wrong, but the IT staff doesn't tell anyone anything other than "we've got a system down and we're working on it".
Don't want to look bad, ya know?
The report supposes that data "overflowed the process' input buffers" (see buffer overflow above) in the system, which caused the alarm system failure. This means that neither the server nor the remote terminals spewed out any data about the grid problems. Oops.
Since the data overflow wasn't stopped, when the system transferred over to the backups, the backup servers failed as well under the data load.
While it was happening, this overflow caused the operators' screens to refresh only once every minute, compared to the normal rate of once every 1 to 3 seconds. These screens are also "nested" underneath the top-level screens that the operators view, slowing things down to a crawl.
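To illustrate the failure mode (and this is purely a hypothetical sketch in Python, not the XA/21's actual code), picture an alarm process whose input queue keeps growing once processing stalls. Failing over to an identically configured backup doesn't help, because the backup chokes on the same data stream:

```python
# Hypothetical sketch of the failure mode described above: an alarm
# process whose input buffer grows without bound once processing stalls,
# so the backup server inherits the same load and dies the same way.
# This is NOT the XA/21's actual code, just an illustration.
from collections import deque


class AlarmServer:
    def __init__(self, name, max_buffered_events=10_000):
        self.name = name
        self.buffer = deque()
        self.max_buffered_events = max_buffered_events  # stand-in for finite memory
        self.stalled = False  # the hypothetical bug: event handling hangs

    def receive(self, event):
        self.buffer.append(event)
        if len(self.buffer) > self.max_buffered_events:
            raise MemoryError(f"{self.name}: input buffer overflow")

    def process(self):
        if self.stalled:
            return  # alarms silently stop being raised or logged
        while self.buffer:
            self.buffer.popleft()  # normally: raise alarms, write log entries


def run_grid_day(primary, backup, events_per_minute=500, minutes=60):
    """Feed the same telemetry stream to the primary, failing over once."""
    active, standby = primary, backup
    for minute in range(minutes):
        for i in range(events_per_minute):
            try:
                active.receive(f"telemetry {minute}:{i}")
            except MemoryError as crash:
                print(crash)
                if standby is None:
                    print("both servers down; operators are now flying blind")
                    return
                print("failing over to", standby.name)
                active, standby = standby, None  # backup gets the same load
        active.process()


primary = AlarmServer("primary")
backup = AlarmServer("backup")
primary.stalled = backup.stalled = True  # same software, same defect
run_grid_day(primary, backup)
```

The usual fix for a design like this is backpressure or load shedding at the input, so a stalled consumer fails loudly instead of silently.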
By now the IT guys had arrived and "warm booted" (rebooted without powering off) the systems. They checked the servers and saw that all was good, but never verified with the control room operators that the alarm system was functioning again.
"Just reboot it, and we can go home guys, no one will notice that anything major was wrong".
What's interesting is that the operators hadn't noticed the real problem. They didn't call about the alarm system until about an hour after the IT staff started working on things (and had 'fixed' it 30 minutes before).
The alarm system displays had "flat-lined" (they didn't go to zero, but just stayed where they had been at the point of failure, which is unusual given the normal voltage changes in the grid), and no one seemed to notice or care.
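This is the kind of thing a simple watchdog could flag automatically. Here's a rough, hypothetical sketch of the idea in Python: if a reading that normally fluctuates hasn't changed at all for several minutes, assume the feed is dead, not the grid.

```python
# Hypothetical staleness watchdog: grid readings normally fluctuate, so a
# value that hasn't changed at all for several minutes probably means the
# display has flat-lined, not that the grid has gone perfectly quiet.
import time


class StaleFeedDetector:
    def __init__(self, max_quiet_seconds=300):
        self.max_quiet_seconds = max_quiet_seconds
        self.last_value = None
        self.last_change = time.monotonic()

    def update(self, value):
        """Record the latest reading; return True if the feed looks dead."""
        now = time.monotonic()
        if value != self.last_value:
            self.last_value = value
            self.last_change = now
        return (now - self.last_change) > self.max_quiet_seconds


detector = StaleFeedDetector()
# In a real console this would be fed live voltage readings every few
# seconds; the same value repeated for more than five minutes trips it.
if detector.update(345.0):
    print("telemetry looks flat-lined; wake somebody up")
```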
Once they did figure out what was wrong, it was too late. The cascade had started, and the operators didn't want the IT staff to "cold-boot" (power off and restart) all the systems, because they were afraid that they wouldn't have any data after that, even though what they had was pretty useless.
The rest is history.
I don't know if these systems are connected in any way to the Internet, but I'd be surprised if they weren't. 100% isolation of a private network is difficult to maintain these days. Someone somewhere always hooks something up to help them get easier access to resources they need. If someone mounted a concerted effort against utility and power systems through these connections, it would be easy to see how it could get many people hurt or killed.
It's all the computers' fault.
Really.
Monday, November 17, 2003
Now Jack's Heard Everything
It's getting harder and harder to keep computer systems patched with the latest updates and fixes. It's a real problem of resource management in many IT shops today.
So now we're told that we need to watch out for virus writers from outer space.
I don't believe it. This guy watched Independence Day (ID4) too many times. You know, when Jeff Goldblum took his Apple PowerBook up to the mothership and introduced a virus into their system that caused havoc.
Heck, we can't even get our own systems to integrate correctly. How's an alien species going to hack our operating systems without knowing anything about them?
You know Mr. Carrigan wears a tin foil hat to go along with his tin foil wallpaper.
Sheesh.
Thursday, November 06, 2003
Matrix Revolutions (No Spoiler)
Just finished seeing it. What is nice about the series is that you can basically watch the first one, and the story is finished. The rest is just a wild acid trip. I've been drunk but never stoned, though now I think I have an appreciation of what it feels like.
As far as Matrix Revolutions goes, I only have three letters to say about it:
(Explanation: First one is in "Matrix Code" Font, Second is in "Matrix Schedule" font.)
Tuesday, November 04, 2003
Cool Weaponry
I probably should have forwarded this on to Anna for comment, as in my perusing I was directed to an article about microwave weapons the military is developing.
It's really cool. Especially the part about how the directed beam versions have to be "pulsed", unless you want to create a bit of white-hot plasma.
Ahhhh, makes one think of one of my favorite Arnie scenes:
The Terminator: The .45 Long Slide, with laser sighting.
Alamo Guns Clerk: These are brand new; we just got these in. That's a good gun. Just touch the trigger, the beam comes on and you put the red dot where you want the bullet to go. You can't miss. Anything else?
The Terminator: Phased-plasma rifle in the forty watt range.
Alamo Guns Clerk: Hey, just what you see, pal.
Sounds like Plasma-rifles are just around the corner.
Hats, Caps, Stetsons, Fedoras = Linux
The big news in the Linux world is that RedHat is no longer going to support the current versions of its RedHat Linux (6.x, 7.x, 8.0, and 9.0) after this coming April. Some think this means the sky is falling.
Of course, as this reply to the Slashdot article says, it's hardly that. In fact, as several of the linked articles state, RedHat is pushing people towards Fedora, which is basically the beta of the next version of RedHat. I'm looking forward to the changes, as Fedora is supposed to be more "bleeding edge" with updates, something 'normal' RedHat Linux was slow to adopt because of the testing that goes into a product a commercial company charges for.
If you read the Zone-H article, it would seem that it's the end for "free software" a la Linux, and that it's time to move to some version of BSD or other 'free' distributions of Linux. Of course, there are literally hundreds of distros available for Linux. The author of the article even begins whining... well, you read it:
"WhiteHat should be the 'good' hackers, while 'BlackHat' the bad ones (the bad guys). What does RED stands for ? If you hope it was meant for communism.... it looks dramaticaly just like the passage from Lenin to Stalin: from revolution, spirit of freedom and unity of people, to just another dictatorship. Thank you RedHat."
So Communism is good, Dictatorship is bad.
Actually, both are bad. Which is why Linux is going more commercial. Find something that is useful. Improve it, like RedHat (or any of the other distros compiled by commercial companies) did, and then charge for your efforts. It's the way commerce works.
But many of these Linux zealots seem like they are straight out of the 60's with communal farming = community programming.
It just doesn't work in the long run.
Since the core of Linux will always be free unless the GPL is revoked, anyone has the ability to roll their own. So quit whining. You want free stuff? Build it yourself. You want a nice packaged deal that does all the work for you? Pay the people who take the time to do it. And since the Fedora project takes input from the users and developers who get it for free, they are paying for the distro with their labor. It's still not free.
Monday, November 03, 2003
Trust No One!
That was the mantra of an old favorite Role Playing Game of mine, "Paranoia". (Before they ruined it with the 2nd edition)
So, back to the point.
I'm working on a client's computer this weekend. It has two problems: CPU utilization in Windows XP is a constant 100%, and Microsoft Word will not open any files. So I start poking around with the obvious things: spyware and viruses.
The computer already has Norton Internet Security on it (up to date), and the user has run Adaware multiple times. With the CPU at 100%, I didn't want to try to run anything on it. Besides, if it was compromised, that wouldn't have done any good anyway. So off comes the cover, out comes the hard drive, and into my forensics workstation it goes, which has several different types of scanners.
So I run Command Antivirus, Norton Antivirus, Trend Micro's HouseCall web-based free scanner, Spybot Search & Destroy, Adaware (again), McAfee's AntiVirus, and Grisoft's AVG. Basically, the kitchen sink of scanners.
Nothing.
Didn't find a thing, and the CPU was still at 100% when the hard drive was put back in.
OK - the System process was using 80-90% of CPU time. That usually indicates a device driver of the wrong version (say, for Windows ME, which this machine originally had installed).
Check all the drivers by hand. All are the Digitally Signed XP versions. Shoot. No dice.
Check the registry (where I should have started). Buried in an obscure section I found a reference to 'server.exe' (the Sub7 trojan program) and a 'systray.exe' where it shouldn't have been (another trojan). Removed those two files, reboot.
System works fine now.
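If you want to do a similar sanity check on your own machine, here's a rough Python sketch that lists what's registered to launch at startup, so oddballs like a stray 'server.exe' stand out. The key list covers only the usual suspects, and it's not literally what I did on the client's box (I dug through the registry by hand), but it's the same idea.

```python
# Rough sketch of the registry check described above: list the values
# under the common "run at startup" keys so odd entries (like a rogue
# server.exe or a systray.exe in the wrong place) stand out.
# This is an illustration, not the exact method used on the client's box.
import winreg

AUTORUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\RunOnce"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]


def dump_autoruns():
    for hive, path in AUTORUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # key doesn't exist on this system
        with key:
            index = 0
            while True:
                try:
                    name, command, _type = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values under this key
                print(f"{path}\\{name} -> {command}")
                index += 1


if __name__ == "__main__":
    dump_autoruns()
```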
The date on the trojans was October 24th, 2003. I took the hard drive out of one system and scanned it in another, yet none of the scanners found those two programs (one in C:\ and the other in C:\Windows\System32), even though they were in non-hidden directories. The drive was even formatted in FAT32, so it had nothing to do with file permissions or ownership. The anti-virus program on the system had been there for 8 months and was kept up to date.
Still feel protected by your Anti-virus programs?
Think again.
Just be careful using your system.