Author - Ashley Schroder

Hi! I'm Ashley Schroder, the head software engineer at World Wide Access, a company that exports New Zealand products to the world.

I'm an active member of the Magento community, contributing open source extensions to Magento Connect and sharing my Magento experiences through my blog, aschroder.com. You can find helpful tips and advice there on a wide range of Magento development issues.

Reader Comments (54)

  1. Branko Ajzele
    April 18, 2011 at 6:59 pm /

    Great, insightful article. Appreciate it.

    1. brandon
      April 2, 2014 at 11:35 am /

      Excellent article. Would it suffice to say that using database session handling could become a bottleneck as more concurrent users are added, UNLESS you add a replicated database server with a load balancer to handle them?

      I am trying to decide whether to use file-based (since my front-end web server uses SSDs) or database, so I can easily scale the DB server later…

      It would make an interesting test to determine what number of concurrent users would warrant the addition of a replicated and load balanced database…please update this if you decide to do that ;)

      1. Sonassi
        April 2, 2014 at 3:52 pm /

        Using the DB server for session storage is a bottleneck from the onset. Its limited support for locking and lack of pruning/expiry make it the worst possible option of those available.

        If you have a single web-server, then file-based session storage is more than sufficient. Consider Redis when you have multiple web servers (at which point, I would hope you have exhausted the maximum possible vertical scaling available).

        Using the DB for session storage should never even be a consideration.
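
        (Editor's note: for anyone wanting to try these options, the session handler in Magento 1 is selected in app/etc/local.xml. A minimal sketch follows, using only the stock values discussed in this thread – treat it as illustrative rather than a drop-in config; Redis requires a third-party module such as Cm_RedisSession.)

          <config>
            <global>
              <!-- One of: files (default, stored in ./var/session), db (core_session table),
                   or memcache / memcached (these also need a <session_save_path>; see the
                   sketches further down the thread) -->
              <session_save><![CDATA[files]]></session_save>
            </global>
          </config>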

  2. Links 13/2011 bis 15/2011: Magento, Magento und noch einmal Magento | Matthias Zeis

    [...] meanwhile, videos from the Magento Imagine sessions are online. Ashley Schroder: which storage should you choose for sessions, and why? H&O: a very, very slick customisation of the Magento checkout – only [...]

  3. Magento Session Storage: Which to Choose and Why? – Tutorials – Magebase | Magento Training Course

    [...] These are: file-storage (the default), database, memcached and tmpfs-backed file-storage. Link – Trackbacks source: Topsy – magento tutorial – Magento Session Storage: Which to [...]

  4. On working from Pukekohe | ASchroder.com

    [...] a month since I last wrote anything, but I dropped an article on Magebase earlier this month about Magento Session storage, have you read [...]

  5. Nick
    May 22, 2011 at 9:07 am /

    This was a really awesome read. I am installing Magento right now and debated about the session storage. I set it to database. :)

  6. Muhammad Ali
    August 5, 2011 at 7:45 am /

    Hi, is there any way I can configure a separate database connection for session management? That way I could store session data in a separate database and put less load on my catalog database in Magento.
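
    (Editor's note: stock Magento 1 gives the db session handler the default core database connection, so there is no supported local.xml switch for a dedicated session database. The sketch below only declares an extra connection resource – a custom session resource model would still have to be written to use it. The session_db name, host and credentials are illustrative assumptions, not core configuration.)

      <config>
        <global>
          <resources>
            <!-- Hypothetical extra connection; the core session handler will
                 NOT pick this up by itself -->
            <session_db>
              <connection>
                <host><![CDATA[sessions.db.internal]]></host>
                <username><![CDATA[magento]]></username>
                <password><![CDATA[secret]]></password>
                <dbname><![CDATA[magento_sessions]]></dbname>
                <model>mysql4</model>
                <type>pdo_mysql</type>
                <active>1</active>
              </connection>
            </session_db>
          </resources>
        </global>
      </config>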

  7. X-Cart Developer Blog
    November 7, 2011 at 11:54 pm /

    This was really helpful content. Keep sharing…

  8. Girish
    January 7, 2012 at 12:03 am /

    The content covered is great!

  9. Sonassi
    March 3, 2012 at 7:11 am /

    Well-written article – but a little on the useless side.

    You used a test tool that doesn’t really support session creation (cookie support). Apache Bench will test pure throughput, but will not have any bearing on sessions. You would be better served using Apache jMeter with a proper real-world simulation (link clicks, cookie support, multi-page viewing per session).

    Then on top of that, you will really not notice an issue until you start getting into the tens of thousands of sessions. So a small concurrency test over a short period of time will not highlight anything.

    With file-based sessions, they will be auto-pruned by the PHP session clean-up cron – so the files are likely to be deleted within ~7200 seconds of creation. So even on a busy site (30k uniques per day), there are usually only around 4,000 session files in ./var/session – which is nothing for a Linux server.

    With Memcache sessions, TCP/IP is the only overhead – which, for a single-server deployment, would make it slower than file-based. So then, you would use a unix socket instead, which removes that overhead and gives better security. But even still, your customer sessions will be truncated/limited by the amount of RAM you can allocate. The average Magento session is 4KB – so you’ll be able to support 256 active sessions per MB you allocate. So be sure to set an appropriate limit to avoid customers randomly losing cart/session. And also bear in mind, a Memcache daemon restart will wipe out all existing sessions (BAD!).

    Then with the DB, the default prune expiration setting is a mighty 1 week, so with the above store size as an example (30k uniques per day), you’ll be looking at a DB table size for core_session of around 7GB – which will grind your store to a complete halt for almost every session-based operation.

    From experience of hosting both large (230k unique visitors per day) and small (<1k unique visitors per day) stores, our recommendation is:

    Single-server deployment = files
    Multi-server deployment = memcache (using a separate TCP/IP instance from your main Magento cache)
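
    (Editor's note: a minimal local.xml sketch of the multi-server recommendation above – memcache-backed sessions on a dedicated instance, separate from the one backing the Magento cache. The host and port are placeholder assumptions; a second memcached daemon is presumed to be running there.)

      <config>
        <global>
          <session_save><![CDATA[memcache]]></session_save>
          <!-- Dedicated memcached instance for sessions (e.g. port 11212),
               kept separate from the instance used for the Magento cache -->
          <session_save_path><![CDATA[tcp://10.0.0.5:11212?persistent=1&weight=2&timeout=10&retry_interval=10]]></session_save_path>
        </global>
      </config>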

    1. Sonassi
      March 3, 2012 at 7:15 am /

      Oh, furthermore, no-one should ever be using tmpfs for **anything** on Magento. Be it cache storage, session storage or anything else.

      The performance difference is negligible compared to using disk alone (unless your hosting provider is using 10-year-old 4,200 RPM HDDs). The Linux file system cache will appropriately place the correct files into the RAM cache by itself as demand for them increases.

  10. Colin Mollenhour
    March 6, 2012 at 3:52 pm /

    It should be noted that the database session handler does not implement locking, whereas both memcache and file do. So, using the database it would be possible for one request to overwrite the session of another request. Probably not a huge deal since the cart is stored separately in the database, but it could perhaps cause flash messages to be lost.

  11. Steve Holdoway
    March 6, 2012 at 5:34 pm /

    Hmm… ( note my experience is primarily with CE ). These figures just don’t gel with my experience of administering servers running Magento. You can literally *feel* the difference on a small/medium Linux server when all else is tuned correctly and you swap from db or file to tmpfs support. It’s just plain much snappier. You’d need at least a 25% difference to notice that.

    As a quick test, set it up for tmpfs, and then screw with the permissions on var/session so it has to write session data to /tmp instead. The overhead of a failed file open is trivial compared to the file creation/access itself. The overhead of creating a file is extremely heavy: 20 years ago I doubled the speed of a process where files were used as flags just by using a hard link to a pre-existing file instead – it wouldn’t happen now but it is a good example. That’s part of what you’re streamlining by using an ultrafast disk. No amount of buffering will change that. The rest – well, sessions aren’t usually around long enough to efficiently use the file buffers. It’s the initial access time of a site that you’re really improving.

    The way I see it, running a website is 99% readonly stuff. If you can serve all of that from memory then you’re going to have a far more performant site than having to go to disk to deliver content. What do you need to write? Logs and purchases… what else ( ignoring the admin side for now )? Given the current cost of memory ( I think I’ve just paid NZ$25 for 4GB ), a base build of 2GB will cover most shops, and going to 4GB is little hardship.

    Without a doubt, the 3 most successful performance upgrades to a linux server supporting a Magento CE site are (in order):

    1. Use PHP in FPM mode, integrating APC with both the Magento FE and PHP.
    2. Replace Apache with nginx ( ESPECIALLY if running in SuPHP mode )
    3. Put sessions on a tmpfs partition.

    (Without them you see the 5-second product display that’s so prevalent in poorly served Magento sites)

    There’s plenty more that’ll improve further ( I’m assuming server / infrastructure tuning as a given ), but the return is diminishing fast.

    What’s the problem with tmpfs? It’s solid as a rock, and blisteringly fast (as long as it doesn’t get swapped out). OK, it doesn’t survive a reboot… unless you script for it – just add a bit of code to the web server start / stop blocks. If you have problems with your server’s reliability, then losing your sessions is the last of your problems. IIRC PHP expires them after 24 minutes or somesuch by default anyway. I know risk perception is a purely personal thing, but, based on my experience, this is really low. Way below not mirroring your disks, for example.

    Steve

    1. Colin Mollenhour
      March 7, 2012 at 7:54 am /

      I agree with Sonassi (Ben?) that using tmpfs is a negligible performance improvement and not worth the risk of session losses or out-of-memory problems. If you’re getting many uniques per day, you simply can’t risk losing that many sessions, since it will certainly lead to lost sales. Also, if you get a sudden spike you can’t risk running out of space on the tmpfs, and the bigger your site gets the more likely you are to have large spikes in traffic. The regular files session storage is good for single-server setups and I don’t really recommend changing it until you move to a cluster, at which point I don’t like any of the packaged Magento options. Ben, do you have a way to dump Memcached so it can be restored in case a reboot is needed? E.g. a RAM upgrade.

      1. Sonassi
        March 7, 2012 at 9:30 am /

        Yes, it’s Ben here :)

        tmpfs offers nothing, absolutely nothing, over the “standard” storage mechanisms. That applies to ./var/cache and ./var/sessions alike. Unless of course, you have a critically wrong issue with your I/O subsystem – SAN mounted over 10BaseT ;) It adds unnecessary complication and limits scalability. I’m not negating the speed and stability advantages it might have – but it really isn’t appropriate. It’s just a last-ditch attempt by people trying to speed up Magento who are looking at completely the wrong bottleneck (Magento is not bound by I/O!)

        RE: Memcached survivability, we’ve got a few approaches in operation.

        Repcached – http://repcached.lab.klab.org/ (over HAProxy)
        No replication/redundancy – Power it down during the wee hours :)

        It’s nice that Memcached V3 (BETA) has redundancy built in, but ZF doesn’t really handle it properly. There are also some pretty cool methods out there for Memcached dumping (http://www.dctrwatson.com/2010/12/how-to-dump-memcache-keyvalue-pairs-fast/).

        It is really client-dependent, but even for our largest client (230k daily uniques) we just use a bit of common-sense monitoring and plan infrastructure changes.

        Off topic, but please do not get me started on Nginx vs Apache (vs Litespeed vs Lighttpd) – when it comes to raw PHP/Magento performance, PHP is the bottleneck, not the HTTP server.

        The *only* time when Nginx will outperform Apache (for pure benchmarking concurrency) is when Apache is using mod_php and spawning new threads is a cumbersome process. But when you are using fCGI/FPM, Apache will have the same throughput … and support on-the-fly .htaccess-style modifications.

        And eAccelerator will outperform APC by about 6-10% – just make sure you exclude the appropriate classes and that you have no compatibility issues.

        Back on topic, choosing the storage mechanism is an easy decision with Magento.

        Single server (unless you’ve got one of those Supermicro 8-way E7 monsters) – you’re likely to have a “smaller” site anyway, and file storage will perform and scale perfectly.

        Multi server – you’ve got 3 options.

        1. DB storage – but unfortunately Varien’s implementation of this isn’t great: the core_session table will grow and grow and grow, as well as tying up your DB server and adding TCP/IP overheads.

        2. File storage – yes, it is possible, over NFS/SAN/GlusterFS – but add in the overheads of these replication techniques and PHP FLOCK issues … it becomes impossible to use.

        3. Memcache – recognised worldwide as a distributed session/data storage mechanism, with a variety of different means of providing replication. It’s nice, quick, stable and works out of the box.

        Simples.

        1. Steve Holdoway
          March 7, 2012 at 7:15 pm /

          1. On topic.
          I would like to see evidence. I’m finding it hard to even design, let alone generate, tests to properly exercise Magento session management – I think I’d need ab running concurrently from dozens of IP addresses with a number of crafted templates before it would really be a meaningful benchmark. Single pages / sources are no good. That’s why I’ve only provided anecdotal evidence ( and my disks are mirrored SAS! ).

          These days, I see it’s pretty simple to run 100k+ uniques / day + 100k+ products on run-of-the-mill servers ( and handle spikes far higher than that using a CDN to perform the gruntwork ), now that multiple 6-core/12-thread CPU / 64GB / RAID 10 1U servers are easily available. Most clustered solutions I’ve seen lately have been swayed by the VMware promise of ultra-high uptime / transparent upgrades. Unfortunately, at a price where a mirrored backup server would be just as reliable, and much, much cheaper!

          This is why I see this as being an important enough topic to keep banging on about!

          2. Off topic.
          I feel that you are at odds with a large number of companies who now use nginx to host their sites – whether just as a front-end proxy to Apache or natively is difficult to assess (although if it’s not native, it sounds like just a CV-stuffing exercise and little else useful). These include…

          http://www.magentocommerce.com
          http://www.linode.com
          http://www.tumblr.com
          http://stackoverflow.com (allegedly)
          http://www.stuff.co.nz (well, it’s busy for New Zealand)

          … and of course Apache themselves admit it, with the release of 2.4 ( after a delay of XP’esque proportions ), including comments like
          “We also show that as far as true performance is based – real-world performance as seen by the end-user – 2.4 is as fast, and even faster than some of the servers who may be “better” known as being “fast”, like nginx,” Jim Jagielski, President of the ASF. ( Note the quotes, but it *is* public perception at least. )

          and, of course me (:

          Sure, the ultimate bottleneck is not the web server, but look at the resources that Apache consumes in comparison, just to serve PHP. When you’re talking about the vast majority of generic ecommerce sites running on (for example, my minimum spec) a 2GB / 4 vCPU VPS, these resources really do count… they can be used in far better ways. Especially when most of these sites are being run through dashboards which do their level best to enforce a generic and totally unsatisfactory configuration for Magento ( but that’s a rant for another day! ). I’ve even managed to get sort-of acceptable performance out of a small Amazon instance – the image is out there somewhere… not particularly well tuned though.

          I would also hesitate to recommend eAccelerator over APC, as no releases have been made in about 2 years, and their site can’t even forward the generally published http: link to the live https: one.

          1. Sonassi
            March 7, 2012 at 10:30 pm /

            @Steve I don’t want to turn this article into a debate – so I’ll make this my last response. But I will happily continue it over email if you want.

            1. With all due respect, if you are only aware of ab, you aren’t qualified to make these statements. Apache Bench/Siege are not remotely useful as true benchmarking or real-world simulation tests. They probe a single HTTP request, with no true session support, and really only highlight how good a job your opcode cache/disk buffer is doing.

            We use Apache jMeter for testing, with client instances running on multiple physical servers, with hand-built profiles to simulate the entire customer browsing and checkout process. We even factor in random searches and random numbers of page visits, and truly replicate the various types of shoppers you would expect to see:

            Window shoppers (view many products, even add items to basket, never attempt checkout, generally hit 5-9 pages per visit)
            Sale (view many products, add a few, and complete checkout, generally hit 5 pages per visit)
            Wishlist shoppers (log in to account, add products to wishlist, generally hit 8-12 pages per visit)

            The types of pages they visit are drawn from a list of URLs, comprising search, search + layered nav, category, product, customer account and multiple-address checkout (it’s too hard to re-enact the OPC accordion).

            I can supply a sample test config if you like and you can get a feel for true benchmarking using jMeter. If you are at all serious about performance optimisation, ditch Siege/AB and use the right tool for the job.

            Based on experience, hosting a 100k+ unique visitor Magento store isn’t a simple task, nor can it be achieved with just run-of-the-mill setups. I’m not sure why you mentioned a CDN, as it plays no role whatsoever in improving “Magento” performance – it only becomes valuable on high-traffic sites that serve a huge amount of static media.

            2. Are you REALLY going to use MagentoCommerce.com to justify the use of Nginx? It is one of the slowest, most unreliable websites we have the misfortune to have to use.

            Linode – it’s not Magento
            Tumblr – it’s not Magento
            Stackoverflow – it’s not Magento
            Stuff.co.nz – it’s not Magento

            And this is the main reason why the majority of companies are wrong about true Magento performance – heck, Rackspace and Peer1 (the enterprise partners) recommend separate DB/web servers to improve performance, but anyone who has ever properly bench- and soak-tested a Magento store knows that it is not bound by MySQL; most of our MySQL servers (for shared and dedicated hosting alike) do little to nothing.

            In terms of resources – you are correct. An Apache worker will consume ~30MB regardless of what modules are enabled, whereas an Nginx worker will use about 12MB. But as you said, RAM is cheap – and even on the “30k per day” example, you’ll only have about 20 threads active, whilst PHP-FPM/fCGI does the grunt work.

            I’m going to give you (and anyone else reading) a loose analogy – a (Magento) fast-food restaurant.

            The customer (at the drive-through), is the customer on your web store.
            The till operator (who sits on a chair and hands you the bag of food through the window – Web Server)
            The chef (who cooks the meals – PHP)
            The sous-chef (who prepares the ingredients for the meals – server subsystem)

            The average meal takes 2 minutes to prepare, 12 minutes to cook and 5 seconds to hand to the customer.

            Now, let’s take the perception of Apache (an overweight, unfit, slow window attendant): the kitchen takes 14 minutes to make the meal and hands it to him – then he passes it to you. He is only passing 1 bag every 14 minutes – a pretty easy job.

            But, we want the business to run faster – so we’ll fire Apache and replace him with Nginx (the Ivan Drago of fast-food service).

            The kitchen takes 14 minutes to prepare a meal and Nginx hands the bag of food to you at lightning pace.

            But hold on, it has still taken 14 minutes for the customer to get their meal – even though we’ve got the fastest delivery staff available.

            That’s because it was never the bloke handing you the food that was the bottleneck; in fact, the “slow” person added value by speaking the native language of the kitchen staff and being able to receive updates (.htaccess support). Whereas Ivan speaks Russian and it requires a translator to stop him working and tell him about new updates (edit Nginx config, reload, etc. yada yada yada).

            Nginx won’t improve performance over Apache for Magento – we have proved it time and time again in bench testing.

            Nginx/Apache are the front men – and in the case of Magento hosting are no more than marketing tools. It is easy to show how fast Nginx/Lighttpd/Litespeed are over Apache in terms of reqs/s. On a given server, Apache is good for around 12k requests per second; Nginx can surpass this with relative ease, hitting around 16k requests per second – but this is for **static content**. When you start to hit 12 THOUSAND **PHP** requests per second, then it’s time to consider dumping Apache, but until then, it will perform par-for-par with anything else.

            If you’re building a CDN, Nginx is a great choice
            If you want a static file proxy, Nginx is a great choice (heck, we use it ourselves)

            If you want a flexible, high-performance web server for Magento, Apache is just as good a choice – at the end of the day, PHP (the chef) is slowing you down, not the web server. Never the web server.

            I do wish people would let go of the traditional means of trying to speed up web applications – they just do not apply to Magento.

            We’re a small company, but have unquestionable experience in this field. We test server configurations and different techniques daily – we speak from real experience, we eat, sleep and breathe Magento.

          2. Sonassi
            March 7, 2012 at 10:38 pm /

            I would also hesitate to recommend eAccelerator over APC, as no releases have been made in about 2 years, and their site can’t even forward the generally published http: link to the live https: one.

            What bearing does recency/age or dead links have on performance? The fastest airplane ever made is the SR-71 Blackbird – and that is 50 years old. Does that mean anything newer than that is faster?

            Just perform your own testing and you’ll soon see that eAccelerator 0.9.6 will beat APC every single time.

            Even for raw Magento performance, PHP 5.2.17 will beat PHP 5.3.7.

            1. Steve Holdoway
              March 7, 2012 at 11:12 pm /

              Ben, your analogy is perfect.

              Until it was taken out of service, the SR-71 was maintained and upgraded regardless of cost ( and still leaked fuel on the ground ). Seeing as PHP is a moving target ( and 5.2 was EOL’d Christmas 2010 ), subject to security breaches with saddening regularity, the tools that use it require the same level of diligence.

              For me, using a product in that state is a far greater risk than using tmpfs.

              BTW the SR-71 was the fastest jet-powered aircraft. Whether you consider the X-15 to be an aeroplane is another debate altogether (:

  12. ashley
    March 6, 2012 at 5:43 pm /

    Great discussion guys, thanks for contributing! Let’s hope the Sonassi team can share some of their thoughts on tmpfs.

  13. Steve Holdoway
    March 7, 2012 at 9:56 am /

    @Colin – why do you think that memcached is any safer than tmpfs? It has a max memory size just the same as tmpfs, and will lose stuff when full / the server fails – and both need configuring / monitoring / constant managing for best results. Also, I haven’t seen a config for Mage anywhere that uses named sockets to access it, so you’ve added on the overhead of a TCP stack.

    And, of course dumping tmpfs is a trivial task. If you’re really paranoid, then saving it every minute would be no big deal. The only way I know of to keep memcache data is to replicate it, for which you’ll need that TCP stack again.

    As I said before, I consider tmpfs to be an extremely low risk; please explain why you find memcached to be a lesser one.
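
    (Editor's note: on the named/unix socket point – the PHP memcache session handler accepts a unix:// save path, so a sketch like the one below should avoid the TCP stack on a single box. The socket path is an assumption (memcached started with -s /var/run/memcached/memcached.sock), and it is untested here; treat it as a starting point rather than a known-good config.)

      <config>
        <global>
          <session_save><![CDATA[memcache]]></session_save>
          <!-- memcached listening on a local unix socket instead of TCP -->
          <session_save_path><![CDATA[unix:///var/run/memcached/memcached.sock?persistent=0]]></session_save_path>
        </global>
      </config>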

    1. Colin Mollenhour
      March 8, 2012 at 5:47 pm /

      Steve, I only suggested memcached over the plain filesystem if you are using a cluster, and only then because the other cluster-friendly options suck. For single servers I stick with the filesystem. Of the out-of-the-box options for a cluster, memcached is the best one (since it is the only one that fully supports locking and doesn’t abuse your database). I really don’t like memcached much either, which is why I’m working on a MongoDB-based handler that plugs into Magento and uses opportunistic locking.

      Overall, I don’t think reading and writing one session file per request is going to have much impact on perceivable performance *until* whatever system you are using is pushed past its practical limits. For the filesystem, the practical limits are very stable long after the filesystem’s cache is full. For tmpfs, the sessions will start being copied into swap, at which point tmpfs will perform much worse than the filesystem. For memcached, you start losing LRU sessions but performance will not degrade. For the MySQL database, I’m guessing everything would just crash and burn. Which of those options sounds the best to you?

  14. Ryan
    May 18, 2012 at 5:40 pm /

    Do you know how to configure the xml file to store sessions in files but at the same time store the cache in memcache? It seems that it is all or nothing for each of the storage options.

  15. Steve Holdoway
    May 18, 2012 at 6:05 pm /

    @Ryan, you set the session data to be stored in files with ( both snippets appear in the global block ):

      <session_save><![CDATA[files]]></session_save> 
    

    I usually use a 2 level cache – basic starting config:

      <cache>
        <backend>Apc</backend>
        <prefix>magento</prefix>

        <slow_backend>Memcached</slow_backend>
        <fast_backend>Apc</fast_backend>

        <slow_backend_options>
          <servers><!-- The code supports using more than 1 server but it seems to hurt performance -->
            <server>
              <host><![CDATA[127.0.0.1]]></host>
              <port><![CDATA[11211]]></port>
              <persistent><![CDATA[1]]></persistent>
            </server>
          </servers>
          <compression><![CDATA[]]></compression>
          <cache_dir><![CDATA[]]></cache_dir>
          <hashed_directory_level><![CDATA[]]></hashed_directory_level>
          <hashed_directory_umask><![CDATA[]]></hashed_directory_umask>
          <file_name_prefix><![CDATA[]]></file_name_prefix>
        </slow_backend_options>

        <memcached>
          <servers>
            <server>
              <host><![CDATA[127.0.0.1]]></host>
              <port><![CDATA[11211]]></port>
              <persistent><![CDATA[1]]></persistent>
            </server>
          </servers>
          <compression><![CDATA[]]></compression>
          <cache_dir><![CDATA[]]></cache_dir>
          <hashed_directory_level><![CDATA[]]></hashed_directory_level>
          <hashed_directory_umask><![CDATA[]]></hashed_directory_umask>
          <file_name_prefix><![CDATA[]]></file_name_prefix>
        </memcached>
      </cache>

    That said, I see a fairly ordinary site storing 80MB of data in memcache but it only seems to deliver back at < 10kB/s – so the actual difference between that and a straight single-level APC cache may be minimal, for a fair bit of effort and extra administration.

    I still stand by my assertion – irrespective of others’ belief in the generic file system cache! – that tmpfs makes a fair difference… in fact, in back-to-back tests of a well-used ( in developmental terms! ) 1.4.1 site on dedicated hardware ( loads of spare mem and CPU power ), it improved a random product page load by over 20%.

    1. Colin Mollenhour
      May 20, 2012 at 6:13 pm /

      Steve, I benchmarked files vs tmpfs a while back and here is what I found:

      Debian 6.0, PHP 5.3, RAID 1 250GB SATA enterprise-class drives, dual E5620, 12GB RAM
      Read performance: Nearly identical (due to fs cache no doubt)
      Write performance: Decent improvement (but was already quite fast)
      Tag clean performance: Minor improvement (still every bit as horrendous for all practical purposes)

      I don’t remember the exact numbers but I pretty quickly determined the pros did not outweigh the cons, especially for sessions. I don’t know what testing method you used to arrive at a 20% page load improvement, but I’d venture to say if you run a sound benchmark you will have similar results to mine. With regard to sessions, we are talking about one file read and one file write per page load, so for that to make *any* perceptible difference you’d have to have some painfully slow disks.

      Benchmark tool: https://github.com/colinmollenhour/magento-cache-benchmark

      If you are interested, here are the slides from my presentation on Zend_Cache backends at Imagine 2012: http://info.magento.com/rs/magentocommerce/images/imagine2012-tech-cache-showdown.pdf

  16. Steve Holdoway
    May 21, 2012 at 10:22 am /

    @Colin. I see your paper is primarily designed to forward the use of Redis as a caching mechanism. Whilst this is quite possibly a really good idea ( I haven’t checked ), the findings are not necessarily relevant, and the amount of other, configurable software that is in the way means I’d have to spend a lot of time analysing your results to have a valid opinion. Additionally, I see no display of the amount of work the server itself is doing, and what resources are in use – probably *the* most important factors in choosing your production stack.

    My testing – as I said – is ‘real world’. I used a web performance testing service local to the server ( well, Sydney to Christchurch is a latency of c.40ms, but that’s pretty real-world for the Antipodes too ), and used it to time the delivery of a random page. Hardware is a Dell R-310, quad core, 16GB, RAID 1 SAS disks. Network connectivity to the internet is 100Mbit. Software is Squeeze 6.0.4, Percona DB, PHP 5.3 in FPM mode, nginx 1.2.0. All resources have been carefully tuned.

    The website I used was a pretty standard 1.4.2-based Mage site, which had undergone a fair bit of development in its lifetime ( I think it’s having another spurt right now! ), with a handful of common plugins. A typical, rather than expertly designed and tuned, web system.

    Why do I take this approach? Well, because in my experience ( I wrote my first test harness for a British Telecom project 25 years ago ), a lab approach to testing will only go so far. One thing that is rarely accounted for is the environment the finished product runs in – e.g. ‘slow’ communications between client and server, and how/when the browser will accept information from the server.

    Because of this, the ‘best’ solution for the visitor to a website may not be the best result in the lab.

    Back-to-back tests with and without var/cache and var/session running on tmpfs filesystems, with a completely empty cache:

    Before: 7.3s
    After: 5.8s

    Subsequent access with a full cache

    Before: 2.9s
    After: 2.5s

    ( I have URLs, but have not requested permission to publish from the owner )

    Unfortunately the system had to go live rather faster than expected, so I didn’t have the chance to investigate further. I think that the real gains are in the session data, as the admin backend is appreciably sped up as well – and that uses it a lot more.

    I would recommend that a real-world approach be used in conjunction with lab testing, and that a close eye be kept on the system resources whilst the test is in progress – you never perfectly tune a server: you’re just moving the bottleneck around (:

    Steve

  17. sonassi
    May 21, 2012 at 8:09 pm /

    @Steve, your testing is flawed; you need to use a clean demo store. Your store could be reading/writing megabytes to sessions for all we know – which would explain the difference.

    We host over 80 fairly large Magento stores, between 3k and 200k daily unique visitors – we spend exhaustive amounts of time testing and benchmarking, and I can tell you (with extreme confidence) the tmpfs myth is just that: a myth.

  18. Steve Holdoway
    May 21, 2012 at 8:31 pm /

    @Sonassi. No it’s not. You’re wrong. Not only were the results repeatable, I was closely monitoring the server as well. Like I said, I’ve done this kind of thing once or twice before… we’re talking 20+% here. If you find no difference, then you probably need to look more closely at your server tuning – there’s a different bottleneck masking your results.

    Where’s your demo site, and what are the specs? Let’s see how fast it is!

  19. sonassi
    May 21, 2012 at 9:44 pm /

    @steve http://demo.sonassi.com – it’s sat on a server with 10 other live stores in a shared hosting environment of 2.4GHz QC / 12GB RAM / 2x SATA HDD.

    I/O isn’t a bottleneck for *any aspect* of Magento – hence why we can use SATA disks without issue, even in a shared environment.

    Beyond that, the Linux disk buffer/cache takes care of caching frequently accessed files – making session read times negligible. Session writes pose no issue whatsoever for a normal store – however, if you’ve got some badly written extensions writing more to sessions than they should be, it would explain why you saw a performance benefit.

    If you are having to resort to ramdisks to get better performance, then you have a fairly critical I/O bottleneck. It sounds like you’ve got a partition alignment issue / inappropriate RAID stripe size / badly tuned OS; run iostat and see for yourself. If the blocks read/written differ between the disk itself, partition, device mapper and filesystem – you’ve got I/O issues.

    I would strongly suggest getting in touch with us; we can offer some consultative services to set up your server properly.

    1. Steve Holdoway
      May 21, 2012 at 10:40 pm /

      @Sonassi, given that you’re demoing a 1.4.0.0 site, comparing it to my demo site @ magento.greengecko.co.nz ( which is set up for ultra security, certainly not performance ) shows the only real difference is probably in DNS performance – always a problem here in the Antipodes.

      Nope, no alignment issues, the RAID stripe has been tuned, very well tuned OS thank you. I/O is trivial throughout testing, as you’d expect. I’m running stats all the time during the first and last testing runs – none in the intermediate ones.

      To expect the GP disk cache to work as well as a dedicated shared memory segment is not quite right. tmpfs-backed systems are locked in memory, which the cache contents are not – an image on a tmpfs filesystem will be available 10 minutes after its last access, for example, whereas it’ll be long gone from the cache. Tuning the kernel vm parameters ( amongst others ) will make some difference, but will never, ever be as good – especially for the regular session updates that Zend makes – and will be far, far more wasteful of your available resources.

      I really can’t see any point in contacting you… until you make anything other than unfounded generalised assertions (:

  20. sonassi
    May 21, 2012 at 11:01 pm /

    @Steve – I think we’ll just agree to disagree then.

    If you have to resort to using a ramdisk to achieve better performance, it is an I/O issue.

    Either a) you are writing too much data to individual session files, causing a bottleneck or b) you have an I/O issue or c) your test tool is giving false results.

    I would love to see what you are using for testing to show such a significant difference. We use jMeter and it doesn’t get more realistic than that – and session storage simply is not an issue: the CPU caps out for PHP processing before I/O bottlenecks *anything*.

    On a quad core server, you’ll see an average of about 12 requests per second (across the board of search/category view/product view/checkout), and 12 session read/writes per second is negligible.

    I truly cannot understand where you are getting your numbers from.

  21. Steve Holdoway
    May 24, 2012 at 7:20 pm /

    @Sonassi

    I’ve been trying to work out how you can accurately test a web app in real life with jMeter. There seem to be 2 major problems, if I remember correctly ( well, I checked the first one (: ).

    1. To quote: “JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does”. Given that there is often in excess of 1MB of JS loaded into a Magento page, not actioning it can somewhat skew the result, and thereby lessens its relevance to the end user’s perception.

    2. Last time I looked, it didn’t support distributed testing… well, it could handle a few clients on the same subnet.

    It is a year or more since I’ve looked closely at it – my native distrust of anything Java clouds my vision – so things may have changed since then.

    Personally, my serious testing uses handcrafted PhantomJS scripts, which are run concurrently from about a dozen servers, distributed worldwide. I then add a pretend load to the server locally using ab and monitor changes at both the client and server end.

    My results won’t show the theoretical maximum the server can provide, but I feel that it does give me a better idea of what the customers are going to get – which I consider to be far more important.

    I have a feeling that your understanding of how the buffer cache and tmpfs file systems work / interact is flawed, which may well be at the core of our disagreement.

  22. sonassi
    May 24, 2012 at 9:18 pm /

    @Steve.

    JS rendering is client side, and has *nothing* to do with *any* server-side tuning/optimisation/changes. It is irrelevant.

    It does support distributed testing. Google jMeter remote / jMeter server – distributed/remote testing is the *only* way to generate significant load to adequately stress a web cluster.

    I cannot continue this discussion any longer; I’ll stop feeding the troll.

  23. Steve Holdoway
    May 25, 2012 at 9:08 am /

    @Sonassi.

    So your testing skips all load generated by AJAX calls, and that’s OK? And you have the resources to generate a hundred or more concurrent, geographically disparate clients to load the server up?

    I had hoped that you’d be up for swapping ideas that readers could try to improve the performance of their websites, and have happily offered my findings. In response, your heavy-handed statements with no backing – often factually flawed – just frustrate.

    I’ll let the reader decide who the troll is.

    Apologies Ashley for rather hijacking this thread. I’m off now.

  24. sonassi
    May 25, 2012 at 9:26 pm /

    Steve, I already sent you a private email to discuss this – why do you continue to debate on here? I am astonished that you are intentionally trying to discredit Sonassi – and as much as I don’t want to rise to this, your unprofessional attitude and responses demand it.

    To answer your questions…

    No, we don’t perform worldwide testing – ever.

    That would be completely and utterly pointless. The massive latency shift would render any results worthless – and no server-level changes would ever change the response time of cross-continent interconnects. Anything beyond our edge routers is outside of our control – and as a result, irrelevant to testing a server-level configuration change.

    If you are trying to benchmark a specific server configuration, the **only** place you should be testing is locally, or within a low-latency, high-speed LAN (1Gbit+).

    Magento makes very few – nigh on zero – Ajax calls in the majority of the front-end, with the exclusion of the checkout – and in that instance, our test posts individually to each controller within the checkout phase. It is easy to emulate Ajax with HTTP requests.

    My backing and worth is four years working for a successful specialist Magento hosting and development agency, with a proven track record, hundreds of happy customers and dozens of our own high-performance Magento servers.

    What would you like me to do exactly? It’s your responsibility to prove that tmpfs makes a difference – as no other poster on here, Colin M or I, can see that it does; not in our tests at least. Hence why I asked about your testing method.

    And for completeness, to quote the email for other readers:

    Hi Steve,

    I thought I’d email you as we’re going to come across petty and childish if our dispute continues on that comment thread.

    When testing servers, we use jMeter **because** it doesn’t render like a browser; we are trying to test server load – not measure template render time. It is the only application that can generate enough “proper” traffic to load-test our 10+ server clusters, and even then it requires a few powerful servers just to run the testing.

    I can send you over a sample profile if you want and you can try it for yourself.

    Regarding tmpfs, we own over 50 (pretty powerful) servers and host over 100 Magento stores. We test new hardware and configurations continuously and one thing we don’t use on any production is tmpfs. We wrote a guide about 2.5 years ago on what worked and what didn’t in terms of improving performance and that is still accurate today.

    You have misunderstood me: I do not think the disk buffer is a replacement for tmpfs. My point was that it isn’t necessary to load files directly into locked memory for fast read access times – the OS will do that itself indirectly via the disk buffer. But I digress.

    I’m trying not to belittle your Linux system administration experience; but I out-and-out cannot understand how you could ever see a performance improvement using tmpfs for session data – the read/write is so infrequent (on a stock Magento store) that it would make no difference whatsoever. Which is why I questioned whether your template is writing more to the session than it should be – it wouldn’t be the first time I’ve seen it, and it would explain your performance difference.

    But in the real world, I/O plainly is not an issue for Magento, and during load testing you would see that. Most time is spent in CPU for PHP, not bound/waiting on I/O – be it session or core file access. It is another reason why the compiler doesn’t really offer that much in terms of a performance boost – unless you are using NFS/SAN.

    NFS has funky support for F_LOCK, which is why session/cache access can be brutally slow – but it is unlikely you are using NFS, as it would be strange to use a network share for file access, but use a local session storage method. Another reason why I cannot understand your performance gains.

    I wasn’t joking when I said I could have a look into your issue for you. tmpfs shouldn’t improve performance on a ‘typical’ Magento store – which would indicate you have a template/code issue (or fatal issues with I/O).

  25. Stuart Macfarlane
    July 22, 2012 at 10:47 pm /

    @Sonassi,

    I have to disagree with you on Nginx. Performance testing done by Peer1 and Magento has shown that Nginx provides better long-term performance for larger, busy websites by reducing the cost of resources required per 1,000 visitors versus Apache.

    Additionally, I cannot see anything wrong with someone using tmpfs on a single server. Magento recommends Memcache for clustered environments to store sessions and cache, and that is nothing more than a glorified tmpfs…

  26. Tegan Snyder
    November 30, 2012 at 5:31 pm /

    This is one of the most interesting discussions on Magento performance I have had the pleasure to read in some time. Next round on me guys!

  27. Hostmethod » 101 ways to speed up your Magento e-commerce website

    [...] Use the correct session storage, choose file system or database (during setup). Most installations should use “file system” because it’s faster and doesn’t cause the database to grow. But if your site will run on multiple servers, you should select “database” so that a user’s session data is available regardless of which server his/her request is served from. More info about this from Ashley Schroder at Magebase.com. [...]

  28. .: Salles Notícias :.
    April 9, 2013 at 8:02 am /

    [...] Use the correct session storage, choose file system or database (during setup). Most installations should use “file system” because it’s faster and doesn’t cause the database to grow. But if your site will run on multiple servers, you should select “database” so that a user’s session data is available regardless of which server his/her request is served from. More info about this from Ashley Schroder at Magebase.com. [...]

  29. 101 maneiras de acelerar o seu Site Magento | .: Salles Notícias :.

    [...] Use the correct session storage – choose file system or database (during setup). Most installations should use “file system” because it is faster and doesn’t cause the database to grow. But if your site will run on multiple servers, you should select “database” so that the user’s session data is available regardless of which server their request is served from. More info on this from Ashley Schroder at Magebase.com. [...]

  30. J
    May 27, 2013 at 12:08 am /

    Although the discussion here was unnecessarily heated & personal at times, methinks, it was also filled with extremely valuable technical nuggets. Thanks to all parties, but especially Ben and Steve and Colin (plus Ashley, our moderator & originator).

    From my own recent magento-performance-tuning saga, I can confirm that tmpfs *does* sometimes really help. However, in my case, I have not yet tried using tmpfs on magento_root/var/session — which I’d rather not be deleted across any potential reboots — but purely and only on magento_root/var/cache … and that single trick, compared to using the unmodified Linux filesystem (with kernel-level buffers for caching filesystem data no doubt!), or even compared to using APC as an app-level-cache (plus it is also an opcode cache of course), gave me dramatic speedups in terms of pageload times, and especially time-to-first-byte aka TTFB.

    My basic test-case is very simple and straightforward: I measure the time it takes to open the admin-backend login-dlgbox, and the time it takes to login aka load the magento dashboard, and then logout. Without tmpfs, TTFB for the click-login-to-get-the-dashboard was in the neighborhood of 5+5+7 seconds, because on the store in question there were a couple of HTTP redirects prior to the actual dashboard-page getting loaded. In other words, from the enduser’s perspective, it takes over 15 seconds before the dashboard even ‘starts’ to visibly load. Measurements can be taken manually with the network-tab of Opera’s built-in Dragonfly or Firefox’s optional Firebug addon, but I’m also taking auto-measurements with PhantomJS and cron-aka-schedTasks.

    With a 256mb tmpfs mountpoint at magento_root/var/cache, the exact same login-to-dashboard takes 1+1+3 seconds for the TTFB, so from the enduser’s perspective, the page ‘starts’ loading after 5 seconds now (and most magento pages do not have double-redirects so the average TTFB is 3 seconds per page instead of 7 or 8 seconds per page). APC, with an allocation of 1024mb, improved the numbers somewhat… but only to 4+4+6 aka 12 seconds, which is better than 18, but nowhere near the 5 which tmpfs delivers.

    Background: this is a live store, and has been up and running for well over a year. It has ~100 skus. It has <1000 uniques/day. It is the latest magento v1.7.0.2 community edition, on Apache (mod_php rather than FastCGI … but altering that did not improve matters significantly). The owners have added *quite* a few magento-extensions to the store: several performance-related ones (trying to solve the problem that tmpfs eventually beat), but mostly feature-related ones (orders++ notifications++ newsletter++ facebook++ amazon++ etc), with somewhere in the neighborhood of fifty third-party extensions installed. Hardware seems well-suited to the task at hand; but I'm testing on a less-well-equipped VPS as a devbox, with root, and speed is similar.

    Figuring out magento-related performance is definitely a big headache, even for smaller stores. Sonassi says that if you see a performance benefit from tmpfs, then you have an I/O problem, typically caused by some extension that is hammering magento_root/var/session. But how to find the culprit? Does anyone have tools to suggest? Preferably tools that can be used against a live production server, without needing the root password? (Not a requirement for my scenario, but 'most' magento stores will be that way.)

    As for the particulars of my situation, magento_root/var/cache *is* definitely hammered by magento itself, which builds these massive XML trees and then writes them to disk, from what I've read on the interweb… and tmpfs can do wonders there. Interestingly, I have two slightly-differing subdirectories, each containing a magento-store instance… tmpfs speeds one of them up (in the dramatic way mentioned above), and does nada for the other subdir. I have no idea why, as yet. Back to my profiling.

    p.s. Hijacking the thread, of course, since the original topic was what-magento-session-store-is-optimal. Sorry.

    p.p.s. Resurrecting a dead thread, dormant for almost a year. Shamelessly!

    p.p.p.s. Pre-invoking godwin's law: anybody who uses magento, is secretly a nazi. Take that, netiquette.

  31. Kalen
    June 25, 2013 at 6:42 am /

    Thanks everyone for the useful information. I feel like I just took Magento Session Storage 101 after finally reading this full thread.

    @J, in regard to checking whether 3rd-party extensions may be causing session storage bloat, perhaps all you would need to do is inspect the size of your session_data serialized data? I think what Ben was saying was that if 3rd-party extensions were writing way too much to session storage, then that could be causing the I/O problem.

  32. J
    June 26, 2013 at 11:17 pm /

    Kalen, our specific problem was with the magento_root/var/cache/* subfolder of our magento-install. We are using the filesystem-configuration-flavor of magento for storing session data, which means it lives in magento_root/var/session/*

    You can also configure magento to store that info in the session_data column of the `$this->_sessionTable`, from quick googling, but that isn’t what *we* were doing on our particular site. Moreover, our performance woes were only related to the cache-subfolder (using a tmpfs ‘ramdrive’ for that subfolder alleviated most of the trouble). I didn’t find it necessary to create a tmpfs mountpoint for the session-subfolder — if memory serves, I *did* actually try that config, with a tmpfs mount for the cache-subfolder and a separate tmpfs for the session-subfolder, but performance was not significantly different, so at the end of the day I just used tmpfs for the cache-subfolder.

    Anyways, the gist of my question about tracking down which extension is responsible for our (former) performance problems is somewhat answered by your suggestion: by running our particular magento-instance under xdebug, and carefully monitoring the filesizes of the magento_root/var/cache subfolder contents, I could eventually figure out which extension(s) were the culprit. Of course, I’d also have to run a similar debugging-sequence against a clean install of magento, to figure out which modifications to magento_root/var/cache were “expected” and which were the “bad” ones.

    Reasonably painful and tedious, from the sounds of it! I was hoping that somebody could suggest a way to track down the trouble without me needing to go through all that work — a shortcut, in other words. I have kcachegrind working via xdebug… but I’m not familiar enough with that to figure out if I can extract just filesystem-mods to some particular folder, and even then, tracking down which magento-extension is actually making said mods is tricksy (kcachegrind doesn’t say). There is a linux tool called strace, which kinda-sorta would do what I am looking for… is there a way to apply strace to magento “easily” for my intended purpose, or is there some strace-equivalent that works more “easily” with magento?

    Clearly, I’m a bit disappointed with the toolset I know to exist: firebug + built in varien profiler does not seem to be enough to diagnose magento performance troubles (not enough server-side detail), and xdebug + kcachegrind is not very straightforward (too much server-side detail). That doesn’t mean it is the tools — maybe it is just me, not knowing how to use them well enough. There are some magento plugins out there, related to debugging — does anybody have suggestions on what methods or tools will help me improve on firebug’s visibility, and/or improve on xdebug’s grokability?

  33. Ben
    June 26, 2013 at 11:24 pm /

    If you saw a performance improvement from mounting `./var/cache` on a `tmpfs`, then it’s caused by one of two things:

    1. Swollen cache (i.e. broken cache tags, duplicate cache data). This has been a bug in Magento since day 1. If you leave the cache to build up and up without emptying it, eventually it will grow so large that simple operations take forever because they are so I/O-intensive (because of the thousands of nested files in the `./var/cache` directory). This is the main reason people use Memcache – not because it’s necessarily faster, but because it has a maximum size and TTLs on data, so it is self-pruning (with the caveat that it doesn’t support tags).

    2. Cheap hosting. HDD I/O will always be a bottleneck, and if you’re using a VPS/cloud setup then I/O will always be constrained – either from competing with other VPSes on the same local disks, or by the bandwidth of the NIC connecting to the SAN, or by the SAN itself.

    Don’t waste your time debugging a bad environment. Move to a different host and see if your issues still exist.

    1. J
      June 28, 2013 at 7:52 am /

      Greetings, Ben — yes, I was able to get a significant speed-boost from tmpfs for magento_root/var/cache … but be aware, the site was already in a seriously poor state when they brought me in to try and help with the performance issues. Opening the login-dlgbox (a background image and a couple of textboxes) at foo.com/storedir/index.php/admin was taking ~8 seconds, and after installing APC, was still taking ~5 seconds. These are time-til-first-byte, not time to render the entire page. Tmpfs kludge brought that down to under 2 secs.

      We tried your suggestion #1, and in fact they have a CacheCleaner.php scriptfile in their cron, which clears ./var/cache every few days. We tried suggestion #2 also, copied our magento-instance to a fresh devbox1 at another provider (shared server like our live store), and then to another fresh devbox2 (at the original provider but vps rather than shared). The numbers jiggled around a bit, but it was still taking ‘forever’, i.e. greater than 4 secs TTFB, to open any backend pages.

      Note that frontend pages were still reasonably quick (1 or 2 secs time-til-1st-byte) on the live store as well as on devbox1 and devbox2; the problem is just the backend. Also worth noting is that the >4 second TTFB applies to every round-trip: pages on the magento-backend that involve 302 redirects of 20 bytes content-length were taking just as long (the time-til-first-byte portion anyways) as pages with 100kb of html. Final nota bene: I did not install magento from scratch on devbox1 and devbox2, but merely gzipped the subtree from the live store, and dumped the mysql db from the live store, for my copies.

      Installing tmpfs kludge on devbox2 sped up the backend noticeably, with time-til-first-byte on the backend now under 2 seconds … when I tried to make the same tmpfs fix work elsewhere, though, I ran into troubles deploying it again (prolly a typo in my linux bash-prompt commands? but not sure).

      At the moment we have decided to saw off our foot to save the leg — we hired a firm to perform a clean magento install, and then a clean export of just our bare customer/product/etc data into CSV, for import into a clean mysql install. Along the way, we are upgrading our theme, and replacing a bunch of crufty magento-extensions with some custom code. Pretty painful to the wallet, but might be the best way to go. Sigh. Anyways, I still have access to devbox2 with the tmpfs kludge, if anybody has questions or suggestions. I never narrowed down the exact cause of the troubles, though I suspect some magento-extension (or possibly multiple extensions ‘fighting’ each other) as the root cause.

  34. Ben
    June 28, 2013 at 9:13 pm /

    @J – seriously. Just change hosts.

    If you have the same issues with a clean store, then your environment is inadequate.

  35. J
    June 30, 2013 at 4:30 pm /

    @Ben, per my second paragraph above, we did change hosts, but without joy. Here are the hosting environments and configurations we have tried at one point or another:

    box#0, webhost#A, shared server, with and without APC
    box#1, webhost#B, shared server, no special caching
    box#2, webhost#A, vps slice, with and without eaccel & tmpfs

    Nothing helped (APC / eaccel / apacheTweaks / magentoSpeedupExtensions / magentoConfigTweaks) except tmpfs. However, I still do not know *why* it helped… specifically.

    On box#2, the vps, we have installed multiple magento-store-instances, but the relevant test-instances are:

    store#2_x, copy of live store (gzip + mysqldump), with eaccel but not tmpfs == 8 second TTFB, 11 second pageload
    store#2_y, copy of live store (gzip + mysqldump), with eaccel and also tmpfs == 2 second TTFB, 5 second pageload
    store#2_z naked magento install using no extensions, with eaccel but not tmpfs == 1 second TTFB, 4 second pageload

    The php/phtml/sql of our live store (which is running Too Many extensions) is the problem, not the hardware environment. Current plan is to slowly rebuild/recreate something which looks and behaves — in most ways — like the currently-live store, but this time starting on top of the speedy store#2_z, measuring speed as we install each new piece. That said, I’m still quite curious what the root cause of the trouble was in store#2_x that was ‘fixed’ by the tmpfs kludge.

  36. Ben
    June 30, 2013 at 8:49 pm /

    @J. Perhaps I should have been clearer.

    Change hosts to a provider that knows what they are doing and that is appropriate for your store specification.

    Drop me an email, I’ll sort out a demo for you.

  37. J
    J
    July 1, 2013 at 1:58 am /

    @Ben, thanks — wilco

  38. Shaun
    August 1, 2013 at 8:26 pm /

    Firstly, an excellent tutorial by Ashley Schroder.

    I have recently had a big issue with Magento speed on a VPS (4 cores + 2GB RAM). The site was working perfectly (only 1,500 products) and then suddenly started getting MySQL timeouts. These ran out of control until the site was unusable within 2 weeks of first noticing the problem. I am not a Magento expert (I rely on the advice of forums) and was at loggerheads with my host’s support team to resolve the issue.

    The problem: core_session, which had ballooned to 1.6GB. As soon as I truncated core_session the site began to zoom again. I have now swapped to file session storage and all issues seem to be resolved.

    Knowing the issues with core_session, I am amazed that there is no core programming within Magento to look after this table.

  39. « Application & Program Tips « Teition Solutions

    [...] at Sonassi who has had quite in-depth discussions about this in various forums (particularly the Magebase forum, which I’d highly recommend reading to fully understand the difference between using a db, or [...]
