
shell_exec silent error

This is likely a common error, yet it can be hard to detect. If you use shell_exec(), make sure to pass the full path to the binary. It might sound obvious, but for convenience many of us don't, myself included at times. You forget that .bashrc sets up your PATH for you, while running a PHP script via crontab is an entirely different environment.

For example, shell_exec('ifconfig') returned an empty string. It happened intermittently, on different servers, not always, which made it even harder to understand. The path was the issue: the logic in question only ran in certain cases, and the bare command happened to work on 2 of the 4 servers. Weird! When you run the script directly from your shell, it inherits your PATH; running it via cron does not have that luxury, so you need the full path: shell_exec('/sbin/ifconfig'). Oh well, lesson learned.
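A minimal shell sketch of the defensive pattern (the PATH value and variable names are my own choices, not from the original setup; the PHP equivalent is simply hard-coding shell_exec('/sbin/ifconfig')):

```shell
#!/bin/sh
# Cron does not source .bashrc, so a script cannot rely on the login
# shell's PATH. Set PATH explicitly at the top (or in the crontab).
PATH=/usr/sbin:/usr/bin:/sbin:/bin
export PATH

# Prefer the hard-coded full path; fall back to a lookup so a missing
# binary is noticed instead of failing silently.
IFCONFIG=/sbin/ifconfig
[ -x "$IFCONFIG" ] || IFCONFIG=$(command -v ifconfig)
echo "resolved ifconfig to: ${IFCONFIG:-NOT FOUND}"
```

The same idea applies to any binary a cron job calls: resolve it explicitly rather than trusting the environment.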

Wget/cURL – Pretend to be a real browser

wget -d -S --referer="http://…" --user-agent="Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv: Gecko/20101203 Firefox/3.6.13" --header="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" --header="Accept-Language: en-US,id-ID;q=0.8,id;q=0.6,en;q=0.4" --header="Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3" --header="Keep-Alive: 300" --header="Connection: keep-alive" --load-cookies cookie.txt --save-cookies cookie.txt --keep-session-cookies "http://…"

curl -v -L --referer "http://…" --user-agent "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.68 Safari/534.24" --header "Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5" --header "Accept-Language: en-US,id-ID;q=0.8,id;q=0.6,en;q=0.4" --header "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3" --header "Keep-Alive: 300" --header "Connection: keep-alive" "http://…"

Firefox 4 – AJAX JavaScript Execution

Upgraded to Firefox 4. It's alright, and may feel a bit faster (or is that advertised placebo?). It looks a lot like Safari; I guess there is a convergence of best practices going on.

Anyway, here is the point of this post:

- Firefox 3: innerHTML will execute any JavaScript code inside it (typically content loaded via AJAX). Other browsers do not.
- Firefox 4: innerHTML will not execute JS code, same as other browsers.

Solution for those who need this functionality: reorganize the code so that the logic runs via a callback afterward, or manually extract the scripts and create script nodes yourself. But I guess it's more consistent across browsers now. Enhanced security, I suppose.
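The "extract and create nodes" route can be sketched like this. The function name and the regex-based parsing are my own simplification (a robust version would assign innerHTML first and then walk the resulting DOM with querySelectorAll('script')):

```javascript
// Split an AJAX-fetched HTML fragment into markup and inline script bodies,
// so the scripts can be re-injected as real <script> nodes (which browsers
// do execute), instead of relying on innerHTML to run them.
function splitScripts(html) {
  const scripts = [];
  const markup = html.replace(
    /<script\b[^>]*>([\s\S]*?)<\/script>/gi,
    (match, body) => { scripts.push(body); return ''; }
  );
  return { markup, scripts };
}

// Browser-side usage (sketch):
//   const { markup, scripts } = splitScripts(xhr.responseText);
//   container.innerHTML = markup;
//   scripts.forEach(src => {
//     const s = document.createElement('script');
//     s.text = src;                 // created nodes DO execute
//     document.body.appendChild(s);
//   });
```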

Man, these bugs are the worst: you just don't know or remember what changed (your code, your external libraries, something else). Wasted a couple of hours just to get back to my earlier conclusion.

Investing lessons from a market sell-off

As a beginner in stock investing, I've learned some basic lessons, hopefully not painful enough to make me shun stocks altogether. They sound very simple but are not easy for amateur investors to follow, so you need to be constantly reminded. In your portfolio, there will be stocks that have gone up or down since you bought them. A falling stock can be a bad sign, or it can simply mean a longer wait before it shines. Always think through what-if scenarios, since they could happen no matter how unlikely. Doing that gives you a better chance of fighting your emotions.

- Gains are good, until they're no longer there.
- Timing a top is very difficult. I sold BA and MCD before they went down for a correction and thought it was simple. Wrong! And now I've missed several opportunities to sell.
- Set a trailing stop-limit to sell stocks that are still flying high, even ones you like, in case they turn around and go down. This locks in your gains if the correction turns out to be bigger than a blip.
- Opportunities always exist; do not think you have missed the boat. The next one is coming sooner than you expect.
- If you really like the stock, or are waiting for a dividend: sell a portion of your position rather than all at once.


  • Money is limited, cash is king, think about tomorrow.
  • Timing the bottom is very difficult.
  • Averaging down on a good stock is good, until you're out of money.
  • Monitor the fundamentals and market signs for negative changes that could turn it into a bad stock.
  • Looking at intraday charts with the technicals, you'll get quite excited: you can finally buy the stock! Yay! But it might not be the best price.

Demand from a Natural Disaster

With the strong 8.9 quake that just happened in Japan, and living in California myself, I have to think hard about what happens before, during, and after a natural disaster, specifically a quake. Let's imagine what people and businesses would have to deal with, and who can benefit from the increased demand. I'll be direct and might sound too materialistic, but the fact is that companies are there to fill demand; that's the reason for their existence.


  • Awareness: the public needs to know what could happen here. Companies: printing services, graphic designers, advertising firms, public service ad brokers
  • Education: teach people what to do, how to prepare for a natural disaster specific to their cities.
  • Planning: prepare a plan to communicate with families, communities. Companies: communication companies, SMS, cellphone operators, food/water suppliers


  • Communication: report damage, shut down equipment. Need: electricity, telephone, cellphones, radio, TV, Internet. Companies: power generator and battery makers, cellphone makers/operators, radio, media/news broadcasters; ATT, VZ, local utilities (ED, PGE, etc.)
  • Information: report from the damaged areas to central control to prepare aids. Need: similar to above


  • Food and basic necessities: water, canned foods, lights, tents, blankets. Companies: PG, KMB, UL, KFT
  • Cleaning: remove debris, tear down houses, clean roads, remove fallen tree branches
  • Waste Management: waste from the cleanup operations needs to go somewhere, and there will be a huge amount of it. Recycling will also be significant, since many items (cars, houses) will be only partially damaged. Scavengers and recyclers will have to work hard.
  • Burial/Funeral: sad, but there will be casualties in such strong quakes. Companies: coffin makers, funeral services
  • Rebuilding: home builders, architects, structural engineers, urban planners, inspectors, construction workers
  • Repairs: replace broken windows and plumbing pipes, fix light poles, patch holes in roofs, repaint houses
  • Insurance: bad for the insurance companies, with many claims coming in. They will need to hire temporary staff to handle the incoming claims and documents.
  • Labor: increased demand for temp workers to clean up, file claims, and rebuild homes/buildings
  • Medical Supplies: needles, blood, consumables within hospitals. Companies: BDX, MDT
  • Medicine: painkillers, antibiotics
  • Hospitals: likely local hospitals first, but patients might have to be moved to more specialized hospitals depending on their injuries/conditions
  • Transportation: helicopters, airlifts, airlines to bring aid, fuel/gas, people going to/from shelters, people going to find their loved ones
  • Building Materials: rebuilding needs wood, nails, tools, paint, etc. Companies: Home Depot, Lowes, SHW
  • Appliances: broken appliances will need to be replaced. Companies: WHR

Recommended software

Some software I cannot live without:
- KeePass (Mac/PC)
- TrueCrypt (Mac/PC)
- Terminal (Mac), PuTTY (PC)
- ExFAT (for read/write access to an external HDD shared between PC and Mac)

Security camera

Here is a wonderful combination: an IP camera (SharX, LTS, YCam), an FTP server (vsftpd), and an online image browser (http://minishowcase.net/). You don't have to install fancy tools; just go online and browse the archived images when you receive a motion-detection email alert from the camera. Gotta love technology.
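For reference, the FTP piece can be just a few lines of vsftpd configuration. This is a sketch from memory, not the exact setup; the idea is that the camera logs in as a dedicated local user and drops snapshots into its home directory, which the image browser then serves:

```
# /etc/vsftpd.conf (excerpt)
listen=YES
anonymous_enable=NO
local_enable=YES          # camera logs in as a local user
write_enable=YES          # allow uploads on motion events
chroot_local_user=YES     # keep the camera jailed to its home dir
```

Point minishowcase's photo directory at the camera user's upload folder and the gallery stays current on its own.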

Bandwidth cost for EC2/cloud computing

Cloud providers often advertise the per-hour instance cost (say, 10 cents per hour). Cheap, right? Not necessarily. There are many additional charges (bandwidth, I/O, etc.) that can end up costing much more than running the instance itself. Any wise company that wants to invest its time in a cloud must crunch the numbers first. SoftLayer at 10 cents/GB is the most reasonable rate I've seen so far for the quality.

Dedicated hosting companies will keep their share because they can oversell (to a degree; some aggressively, some conservatively) by pooling many clients together: some use only 10% of their allocated bandwidth, some use 100%. On clouds, everything is billed on demand. Thus, the best choice is a hybrid approach: use clouds for bursts and surges, and keep the core infrastructure on dedicated servers where deals can be found (i.e., good hosting companies that oversell a little and still provide quality service). And of course, every medium/large online system should be designed to prevent outages (high availability, redundancy, no single point of failure). This is why many dedicated providers (SoftLayer, ThePlanet, LayeredTech, RackSpace, ServerBeach, etc.) now offer their own clouds. Choices are good and the pie is getting bigger.
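To make the number-crunching concrete, here is a back-of-the-envelope sketch. The figures are illustrative assumptions, not anyone's actual rates: a $0.10/hr instance pushing 5 TB/month of egress billed at $0.10/GB.

```shell
# Monthly instance cost vs. monthly bandwidth cost (illustrative numbers).
instance_cost=$(awk 'BEGIN { printf "%.2f", 0.10 * 24 * 30 }')
bandwidth_cost=$(awk 'BEGIN { printf "%.2f", 5 * 1024 * 0.10 }')
echo "instance: \$$instance_cost/mo  bandwidth: \$$bandwidth_cost/mo"
```

With these numbers the bandwidth bill is roughly seven times the instance bill, which is the point: the hourly sticker price can be the smallest line on the invoice.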

Auto Scaling

The idea is really cool and cost-efficient. However, the actual implementation is not as easy as it should be. There are vendors trying to bridge the gap, and I believe it will become much easier in the future.

Problem at 2AM

For many services, usage fluctuates throughout the day (and the week). For example, in our own pattern, traffic bottoms out from 2AM to 8AM (PST). Servers sit idle, which wastes money and electricity. The solution is to scale down during this period: maintain a core capacity and add/terminate servers on demand. That's what the cloud-computing marketing hype is supposed to deliver, but I'd guess few companies take full advantage of it because the level of automation is still very low.
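A sketch of what that scheduled scale-down could look like, assuming an AWS Auto Scaling group named web-asg (a made-up name) and the aws CLI installed on the box; in practice you would also want scheduled scaling actions or load-based policies rather than bare cron:

```shell
# crontab sketch (server on PST): shrink to a core of 5 at 2AM,
# restore full capacity at 8AM.
0 2 * * * aws autoscaling set-desired-capacity --auto-scaling-group-name web-asg --desired-capacity 5
0 8 * * * aws autoscaling set-desired-capacity --auto-scaling-group-name web-asg --desired-capacity 10
```

The two lines are the easy part; the hard part is what happens to the data on the instances that go away, which is the next problem.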

Problem with existing data

Say you have a cluster of 10 servers, but at 2AM you only need 5. What do you do with the rest? It's easy to think you can simply shut them down. Not so fast! What about the data on those servers? If your app simply serves static/dynamic pages and does its logging centrally (a scaling problem of its own), then this works. But if your application generates data and needs to process it in some way, you have to deal with that data before termination. Here are a few possible solutions. Please feel free to add comments/suggestions; I'm sure there are better ways.

Decouple data storage and application layer

This is good practice for isolating layers. However, it comes with a performance trade-off: if your app writes a lot (e.g., logging) to central storage, many app servers can overload the master DB with writes per second, and then the DB itself needs to scale out, making the problem more complicated. Relying on central storage can also be a single point of failure.

Process before destroy

It depends on how fast the data processing can happen: if the server needs 4 hours to process its data, the off-peak hours have already passed.

Move data to another peer before destroy

Peers help each other: the dying instance sends all of its data to another instance and then dies (hey, just like people). The problem here is merging the data (e.g., auto-increment IDs colliding). I think this is the best approach for our particular situation (many, many small writes per second), since any single instance holds only a small portion of the data (versus a central database), and it still follows KISS (keep it simple, stupid).
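A local simulation of that hand-off, with made-up paths; in production the copy would be an rsync or scp to the surviving peer, followed by self-termination:

```shell
#!/bin/sh
# The "dying" side copies its data directory to a peer's inbox before
# shutting down. Temp dirs stand in for the two hosts here.
SRC=$(mktemp -d)     # data on the instance being terminated
DEST=$(mktemp -d)    # the surviving peer's inbox

# Per-host file names sidestep part of the merge problem: no two
# instances ever write to the same file, so nothing collides on arrival.
echo "event 1" > "$SRC/events-$(uname -n).log"

cp -R "$SRC/." "$DEST/"   # production: rsync -az "$SRC/" "peer:/data/inbox/"
echo "handed off: $(ls "$DEST")"
# ...and then the instance would terminate itself.
```

The peer can merge the received files into its own store at leisure, since the hand-off itself is just a file copy.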

Any thought on improvements or other alternatives?

MOD file import into iMovie for the Panasonic SDR-H18

This little trick gets iMovie to import MOD files automatically. Create this folder on the card and copy the MOD files into it: D:\MP_ROOT\101PNV01
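On a Mac, the same card layout can be recreated from Terminal. CARD here is a temp-directory stand-in, so substitute your card's actual mount point (the D:\ path above is the Windows view of the same card):

```shell
#!/bin/sh
# Recreate the camcorder's folder layout so iMovie auto-detects the clips.
CARD=$(mktemp -d)                  # stand-in for the card's mount point
mkdir -p "$CARD/MP_ROOT/101PNV01"
# cp /path/to/clips/*.MOD "$CARD/MP_ROOT/101PNV01/"
echo "created $CARD/MP_ROOT/101PNV01"
```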