Random thoughts of a warped mind…

September 17, 2014

Reconfig for service discovery

Filed under: Amazon EC2,Chef,Git,Linux,Redhat OpenShift,Ruby,Virtualization — Srinivas @ 09:49

Imagine a constantly changing fleet of servers… New servers (or virtual machines) come online as capacity is needed and get taken offline as load drops… Maybe you already use Chef/Puppet to bootstrap your servers and use their node attributes to populate the backend list in a load balancer (haproxy, for example) configuration.

Now adding or removing a server (a backend, maybe a rails/tomcat server, whatever) means chef-client has to run on all the haproxy boxes before they know about the new backend (or the one that went away). This works if you run chef-client every 5 minutes or so, but why? Chef/Puppet are primarily meant to bootstrap your servers, not to sync state. Enter Reconfig and service discovery.
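As a minimal sketch of the idea (the function and server addresses are invented for illustration, not Reconfig's actual API): the discovery-driven half is just regenerating the haproxy backend stanza from a live server list whenever it changes, instead of waiting for the next chef-client run.

```ruby
# Render an haproxy backend stanza from a live server list. In a real
# setup the list would come from a discovery service (the part a tool
# like Reconfig handles); here it is passed in directly.
def render_backends(name, servers)
  lines = ["backend #{name}"]
  servers.each_with_index do |addr, i|
    lines << "  server #{name}#{i + 1} #{addr} check"
  end
  lines.join("\n")
end

# Regenerate the stanza; a watcher would diff this against the current
# config file and reload haproxy only when it actually changed.
puts render_backends("rails_app", ["10.0.0.5:8080", "10.0.0.6:8080"])
```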


August 4, 2014

PubKey for SSH public key setup

Filed under: Amazon EC2,Chef,Linux,Redhat OpenShift,Virtualization — Srinivas @ 13:13

Built and started using PubKey for managing user SSH public keys (add, update and revoke access) on my personal EC2 and Google Compute fleet… Try it out – https://www.pubkey.in/console/ . Docs are available at http://docs.pubkey.in and, for you lazy sysadmins, there is a Chef cookbook available too at https://github.com/onepowerltd/pkagent_cookbook :-)



February 12, 2014

Sync S3 buckets in parallel mode via concurrent threads

Filed under: All,Amazon EC2,EC2,Git,Ruby,S3 — Srinivas @ 18:13

A week back I realized one of my core S3 buckets at work (which we use for a bunch of app uploads that are always needed) was a us-west-2-only bucket and not US Standard. (Don't like that. When S3 gives you 11 9s, why not get a US Standard bucket?) Considering that we had varnish in multiple regions with this bucket as the backend, I wanted to do two things -

1. Migrate all data from this bucket to a US-Standard bucket

2. Migrate all data from this bucket to an EU/Ireland bucket as well (because I have app servers etc. out there which need the same data – didn't want to come across the pond for every object we had to retrieve). Why? Reduced latency and reduced bandwidth costs (it costs nothing when an EC2 instance in the EU pulls an object from an EU bucket).
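The "parallel mode via concurrent threads" part boils down to a work queue drained by a fixed pool of threads. A minimal sketch, with the S3-specific calls shown only as a commented-out example (bucket names and the aws-sdk-s3 usage are assumptions, not the exact script I ran):

```ruby
require "thread"

# Run a block over items using a fixed-size pool of worker threads.
# Each worker pops from a shared queue until it is drained.
def parallel_each(items, workers: 8)
  queue = Queue.new
  items.each { |item| queue << item }
  threads = workers.times.map do
    Thread.new do
      loop do
        item = (queue.pop(true) rescue nil) # non-blocking pop; nil when empty
        break unless item
        yield item
      end
    end
  end
  threads.each(&:join)
end

# Hypothetical cross-region copy with the aws-sdk-s3 gem (not run here):
#
#   require "aws-sdk-s3"
#   src = Aws::S3::Client.new(region: "us-west-2")
#   dst = Aws::S3::Client.new(region: "eu-west-1")
#   keys = src.list_objects_v2(bucket: "my-src-bucket").contents.map(&:key)
#   parallel_each(keys, workers: 16) do |key|
#     dst.copy_object(bucket: "my-dst-bucket", key: key,
#                     copy_source: "my-src-bucket/#{key}")
#   end
```

Since each object copy is independent and I/O-bound, Ruby's GIL is not a bottleneck here; the threads spend their time waiting on S3, so throughput scales roughly with the worker count until you hit network or S3 request limits.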


August 13, 2013

Dump http requests in-transit with tcpdump

Filed under: All,Amazon EC2,Linux,Virtualization — Srinivas @ 14:57

Note to self -

tcpdump -A -s 0 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'

The filter subtracts the IP and TCP header lengths from the IP total length, so it matches only port-80 segments that actually carry payload (skipping bare SYNs and ACKs). Handy on haproxy/varnish boxes to see requests and responses in real time for on-the-fly debugging, as opposed to dumping to a trace file and analyzing it off-server with wireshark or similar…

August 2, 2013

Cloudfront woes – “Your request contains one or more invalid invalidation paths.” – Use custom regexp for URI::encode

Filed under: All,Amazon EC2,EC2,Linux,Ruby — Srinivas @ 12:27

AWS Cloudfront is a content delivery network, part of Amazon's AWS stack, which lets you serve static assets from a source (an S3 bucket or a custom origin server) by caching them across numerous edge locations. Occasionally the underlying content changes and the cache needs to be refreshed – this is done via a Cloudfront cache invalidation request, which specifies a distribution id and a list of paths to refresh (e.g. /index.html or /imgs/logo.png etc).
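The fix the title hints at can be sketched like this (the regexp and helper name are mine; note that the old `URI.encode`/`URI.escape` was removed in Ruby 3.0, so `URI::DEFAULT_PARSER.escape` is shown as the surviving equivalent):

```ruby
require "uri"

# Cloudfront rejects invalidation paths containing characters like
# spaces or parentheses. Percent-escape everything EXCEPT unreserved
# path characters and "/" itself, via a custom unsafe-character regexp.
UNSAFE = /[^A-Za-z0-9\-_.~\/]/

def invalidation_path(path)
  URI::DEFAULT_PARSER.escape(path, UNSAFE)
end

puts invalidation_path("/imgs/my logo (v2).png")
# => /imgs/my%20logo%20%28v2%29.png
```

The key point is passing a custom unsafe-character regexp: the default escaping leaves characters like parentheses alone, which Cloudfront then rejects with "Your request contains one or more invalid invalidation paths."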

