Tuesday, November 20, 2012

Snoop NFC RFID card with RTL-SDR dongle

It's been a big year for radio fun!
Playing with NFC / RFID tags recently, it occurred to me that RTL-SDR dongles could potentially be used to sniff 13.56MHz tags.

As it happens the RTL tuner won't quite tune as low as 13.56MHz, but.. the second harmonic at ~27MHz works great!

Here's a Mifare Classic 4K card being repeatedly read by an SCL3711 NFC reader. I wedged an antenna next to the reader, fired up SDR-Sharp and here we go...

SCL3711 reader + Mifare 4k + antenna to RTL dongle

Center signal = 13.56MHz carrier from reader, side spurs = ASK-modulated reply from card :-)

Next stop, demodulation and a nice cup of tea. 

Video:


Addendum - while video shows antenna strapped to the card, this setup seems to receive both card+reader signals just fine from 15 feet away!

Later:  Ok, never mind the "15 foot" stuff - not true, it seems. Because I was running the RTL dongle + NFC reader on the same PC, it was coupling the RF signals through the USB lines, making things look much better than they really were. I tried tag reading with a non-USB-wired Nexus 7 and the antenna range (for the signal from the card) is as "near field" as you'd expect. So; handy and cheap but not ground-breaking :-)

Friday, April 13, 2012

Mongodb pros and cons - scalability, management in production

Here's a nice easy one...

Don't use mongodb, tune your SQL properly, fool!

I experimented with it a while ago and what I found almost exactly matched what this guy found.
Read what he says and believe.


I didn't go anywhere near production with Mongo, cos I found it basically didn't work very well, whereas MySQL (when given some attention) we find to be astonishingly reliable and speedy. (We do thousands of queries/sec per MySQL box.)

I don't normally hate on free sw, but kids are getting their fingers trapped in this one.

Wednesday, March 7, 2012

"MySQL server has gone away" or "Lost connection to MySQL server" when using Amazon RDS for backup or whatev's

Hey,
OMFG what a PITA.
You run long MySQL jobs, especially on an Amazon RDS instance, especially e.g. backups, and they randomly fail, especially as the db gets bigger, and especially after several hours.
Well FML!
The answer is all over Google; it's either

a) [most common] the "net_write_timeout" variable on the source server (e.g. the one you're dumping from); the AWS default is like 60s, which is sensibly cautious if your clients are some crashy bullshit (hanging result sets/cursors are obviously expensive to keep lying around). 60s is fine for a web app, but it'll bite you if your client occasionally/randomly gets stalled for a really long time (think: EBS, InnoDB reindexing if you're streaming direct via a pipe from mysqldump | mysql to do server->server backups; which can be a fine idea depending on yr needs).

OR

b) [less common] the destination server that you're unpacking/writing to has timed out for similar reasons; you were in the middle of providing [usually a metric fuck-ton of] data in an INSERT or UPDATE, the server doing the insert got hung up on something randomly once in a blue moon (e.g. a system backup), and boom; the mysql box you're writing to tells you to f-off.
Typical AWS timeout here (net_read_timeout) is like 30s.

Ok so there you go; AWS defaults are sensible for a high-volume web db trying to protect itself from bad clients. Backups occasionally classify as 'bad clients'. FML again.

------THE FIX:

--EITHER (easy,global)
a) Change your RDS instance settings (Amazon control panel) to set "net_read_timeout" (and write) to something bigger globally across all connections. You might pick e.g. 20 mins. If you have a lot of crappy/dropping DB connections this might be an issue.
--OR-- (usually easy for client code, not for mysqldump)
b) If you're running client SQL code (e.g. php, python), simply do "SET net_write_timeout=3600" or whatever (or read_timeout, depending on whether yr problem is in SELECTs or INSERT/UPDATEs) on each conn after you open it and bingo.
The var is set per-connection. This works perfectly for client code but not for mysqldump backups which I'll get to below.
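A minimal sketch of what option (b) looks like from client code. This is a hypothetical helper, not from the original post; `connect` stands in for whatever connection factory your driver (e.g. PyMySQL) gives you:

```python
def timeout_statements(seconds):
    # net_read_timeout / net_write_timeout are per-session variables,
    # so setting them here affects only this connection.
    return ["SET net_write_timeout=%d" % seconds,
            "SET net_read_timeout=%d" % seconds]

def open_long_job_connection(connect, seconds=3600):
    # Open the connection, then immediately bump both timeouts on it
    # before any long-running SELECT or INSERT work starts.
    conn = connect()
    cur = conn.cursor()
    for stmt in timeout_statements(seconds):
        cur.execute(stmt)
    return conn
```

Because both variables are per-session, nothing changes for any other client on the server.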
--OR-- (middling, fixes mysqldump)
If your problem is with mysqldump piped dumps via, e.g.,

mysqldump yadda | ... | mysql yadda
(I do this using 'tee' to pipe a copy via gzip into a backup file at the same time as I copy a db direct from one host to another - a compressed copy with no limit on db size. ..actually that is why I'm writing this..)
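For the curious, the tee-into-gzip arrangement described above can be sketched in Python (a hypothetical equivalent of the shell pipeline; the dump/load command lines are whatever you'd normally run):

```python
import gzip
import subprocess

def pipe_dump(dump_cmd, load_cmd, backup_path):
    """Stream dump_cmd's stdout into load_cmd's stdin, while also
    writing a gzipped side copy to backup_path (like tee | gzip)."""
    dump = subprocess.Popen(dump_cmd, stdout=subprocess.PIPE)
    load = subprocess.Popen(load_cmd, stdin=subprocess.PIPE)
    with gzip.open(backup_path, "wb") as gz:
        # Read in 1MB chunks so an arbitrarily large db never
        # has to fit in memory or on local disk uncompressed.
        for chunk in iter(lambda: dump.stdout.read(1 << 20), b""):
            gz.write(chunk)          # compressed side copy
            load.stdin.write(chunk)  # stream straight into target mysql
    load.stdin.close()
    return dump.wait(), load.wait()
```

Usage would be something like `pipe_dump(["mysqldump", "mydb"], ["mysql", "-h", "otherhost", "mydb"], "backup.sql.gz")`.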

The issue you see is that when the target DB stalls (which can happen for e.g. 1-10 min at bad times) the source DB times out (quickly; 60s default) on the write socket. This is not, AFAIK, fixable with normal mysqldump without changing the server's global timeouts for everyone.

The obvious fix is for mysqldump to send "SET net_write_timeout=blah" after opening the conn it uses for dumping. Weirdly I cannot make it do that regardless of options, so I hit up the src code.

Basically I patched mysqldump so it sends "SET net_write_timeout=<something big>" to the source server AND prefixes "SET net_read_timeout=<something big>" into the SQL dump it outputs.

This solves both ends of the problem, especially when piping from one server to another with mysqldump | mysql - both source and dest servers set the timeout _temporarily, on that connection only_, to a nice long value, e.g. 1hr.


Fixing mysqldump so it sends "net_write_timeout"

So I basically grabbed the MySQL src and compiled it (google), on an AWS box.
(actually "yum install mysql-devel ncurses-devel" may be handy, prolly a few more)
You don't need all the mysql stuff, just mysqldump,
..but I compiled everything (./configure {google for opts} and make)

mysqldump is in client/ and all the guts are in mysqldump.c
The hack is utterly trivial;

at the end of "connect_to_db", before the DBUG_RETURN, I added


#ifdef MUNKY_HACK
  my_snprintf(buff, sizeof(buff), "SET net_write_timeout=3600");
  if (mysql_query_with_error_report(mysql, 0, buff))
    DBUG_RETURN(1);
#endif


..which is the important bit: it tells the source server to use a longer timeout just for the dump connection.

IF you're being fancy and piping from one db to another, you may want the output dump to include a prefix telling the target db to use longer timeouts too.
I added this near the top of 'dump_table':

fprintf(md_result_file, "/* Munky Hack */\nSET net_read_timeout=3600;\n");


It's that easy (it seems) to make this stuff work properly and reliably.

Monday, January 23, 2012

Tedious issues escaping/quoting strings for MySQL? Use hex! It's awesome!

Hiya,
Ok, sure, yes, you 'escape' all your strings for SQL (i.e. replace ' with \' and so on for quotes and other non-printable characters).
Sometimes this is unsuitable.

Did you know you can avoid all that palaver and just pass your strings as hex?

Check it out!

For a varchar (or blob, or whatev) column, instead of

INSERT INTO mytable SET col='blah';

try

INSERT INTO mytable SET col=0x626c6168;

How awesome is that? Who knew?

Also, btw, you can get mysqldump to output blobs as hex too (the --hex-blob option), which makes for easier parsing - see the help output.

Wednesday, January 18, 2012

See Wikipedia during the January 18th blackout

Yes, yes wikipedia, we know you're all radical and cool.
Anyway, just append

?banner=none

to the end of your URL, for example (any article works):

en.wikipedia.org/wiki/Main_Page?banner=none

kthxbye

Friday, January 6, 2012

Obscure packetization bug in Verizon cellular HTTP proxy 'Harmony'! Does your app fail on VZW cell but work on wifi?

Wow this is a fucking bug and a half;

(On iPhone, but applies to everything)
Verizon currently appears to proxy any HTTP request over any port regardless of whether you asked for it. It adds

"X-Via: Harmony proxy"

to show how much it 'helped'. Thanks, if I wanted a proxy I'd ask for one.

It does it even on non-port 80!! FFS!
It appears that anything that says HTTP/1.1 after opening a socket on any port is fair game for fuckwittery.

And, wonderfully, Harmony Proxy has a packet reassembly bug!

The client app was (inadvertently) writing the HTTP request split into two TCP packets;
The first packet was only 22 bytes long, containing the URL
The second packet was 500 or so bytes and was the rest of the header continuing from " HTTP/1.1\r\n...etc"

Hence the very first line of the HTTP req was split into two TCP packets. This is obviously rather unusual.
The proxy freaks out at this, throws away all the HTTP headers, and just generally screws the request up (although it does send it).

So, when using socket .send() , make sure you've buffered at least the first few lines of your headers in one go.
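A sketch of that safe pattern (hypothetical code, not the original app): assemble the whole request first, then hand it to the kernel in one call, so the request line and headers can't be split across your own writes:

```python
import socket

def build_request(host, path):
    # Assemble the request line + all headers into a single buffer
    # BEFORE touching the socket.
    return ("GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: close\r\n"
            "\r\n" % (path, host)).encode("ascii")

def fetch(host, port=80):
    req = build_request(host, "/")
    with socket.create_connection((host, port)) as s:
        s.sendall(req)  # one write: headers leave the app in one chunk
        return s.makefile("rb").read()
```

sendall() doesn't *guarantee* a single TCP packet, but a few hundred bytes written in one call will normally go out together, which appears to be all this proxy needs to behave.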

If your stuff works over wifi and other cell networks, but not VZW, and you're using HTTP... watch out for this. It's a bug in their proxy.