Page 1 of 2
Results 1 to 10 of 17

Thread: Running Mysql DB in Memory.

  1. #1
    Join Date
    Jun 2008
    Location
    Portugal
    Posts
    234

    Default Running Mysql DB in Memory.

    Hi,


    My question is only theoretical; I do not know if anyone has ever made such a setup, or if it is even possible.

    My idea is the following one:

    I would like to run MySQL in memory in order to take advantage of the very high-speed I/O.
    I know how to tune a MySQL server, and I also know that giving InnoDB a very big memory cache lets the engine optimize memory usage; if the cache is big enough, it is as if the majority of the I/O is done in memory.
    Nonetheless, I would really like to find a way to use only RAM ... something like mounting /var/lib on a tmpfs and then using a script to synchronize it with the HDD for persistence.

    The ideal setup would be a system that mounts /var/lib in memory (the usual place where MySQL keeps its DB files) and allows synchronization back to the actual disk, for obvious persistence reasons.

    The system would boot from the hard disks, mount the partitions in memory, synchronize them from the local hard-drive partitions, and then some script could do the reverse synchronization from memory/tmpfs back to the disks.

    What applies to /var/lib could also be done with other partitions ... memory size is not a problem ..
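    To make the idea concrete, the boot-time half could be sketched roughly like this (the paths, the 16 GB size, and the mysql user/group are assumptions; it must run as root, with mysqld stopped):

    ```shell
    #!/bin/sh
    # Sketch: put the MySQL datadir on tmpfs and seed it from disk (run as root).
    # Assumed paths: /var/lib/mysql is the on-disk copy, /mnt/memdir the tmpfs.
    set -e

    MEMDIR=/mnt/memdir
    DISKDIR=/var/lib/mysql

    mkdir -p "$MEMDIR"
    # Give the tmpfs an explicit size so a runaway table cannot eat all RAM.
    mount -t tmpfs -o size=16g,mode=0700 tmpfs "$MEMDIR"

    # Seed the in-memory copy from the persistent one (mysqld must be stopped).
    rsync -a "$DISKDIR"/ "$MEMDIR"/
    chown -R mysql:mysql "$MEMDIR"
    ```
    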


    Is this possible?

    Has anyone tried this before?


    Regards.

  2. #2
    Join Date
    Jun 2008
    Location
    West Yorkshire, UK
    Posts
    3,439

    Default Re: Running Mysql DB in Memory.

    Doesn't memcached meet your needs?

  3. #3
    Join Date
    Jun 2008
    Location
    Portugal
    Posts
    234

    Default Re: Running Mysql DB in Memory.

    Hi,

    Quote Originally Posted by john_hudson View Post
    Doesn't memcached meet your needs?
    The simple answer is no.
    Memcached is not a database.
    It does indeed help, no doubt, but I would like to try a lightning-fast solution ... one that would not need memcached. And it seems to me that placing the data files in memory is actually very simple in Linux ... by default a tmpfs is half (!!!) of the computer's RAM ... so why not put it to good use?
    In today's PCs 16 GB is not a problem, so for development databases this is more than enough to run MySQL ...

    I know it is easy to use the tmpfs ... one needs to mount it at, say, /mnt/memdir and then, on boot or before starting MySQL, do an rsync -a /var/lib/mysql/ /mnt/memdir

    Then change the data directory in the my.cnf file and that's it.
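    The datadir change is a one-line edit in my.cnf (assuming the tmpfs is mounted at /mnt/memdir):

    ```ini
    [mysqld]
    datadir = /mnt/memdir
    ```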
    Before shutting down ... and this is really the problem ... one must back up the database and also synchronize /mnt/memdir back to the usual /var/lib/mysql, or somewhere else appropriate for this purpose.

    Of course, if something goes down all data is lost ... and that is a really big problem, for which the only solution is to make regular backups ... but that defeats the purpose of low I/O ...
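    The shutdown half could then be sketched as below; `sync_back` is just an illustrative name I made up, and `rcmysql` is assumed to be the MySQL init script:

    ```shell
    #!/bin/sh
    # Sketch of the shutdown half: flush the tmpfs datadir back to disk.
    set -e

    # Mirror SRC onto DST exactly; --delete removes files on disk that no
    # longer exist in the in-memory copy. (sync_back is an illustrative name.)
    sync_back() {
        rsync -a --delete "$1"/ "$2"/
    }

    # Intended use (as root, with the assumed paths):
    #   rcmysql stop                          # stop mysqld so files are consistent
    #   sync_back /mnt/memdir /var/lib/mysql
    ```
    
    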

    Regards.

  4. #4
    Join Date
    Jun 2008
    Location
    UTC+10
    Posts
    9,686
    Blog Entries
    4

    Default Re: Running Mysql DB in Memory.

    Maybe you want the memory storage engine:

    MySQL :: MySQL 5.1 Reference Manual :: 13.9 The MEMORY Storage Engine

    If not you could develop your own storage engine.
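    As a quick illustration of that manual chapter: the MEMORY engine is chosen per-table via the ENGINE clause (a sketch only; it assumes a running server and an existing `test` schema):

    ```shell
    # Create a table that lives entirely in RAM; its rows vanish on restart.
    mysql test -e "CREATE TABLE kv (
        k VARCHAR(64) NOT NULL PRIMARY KEY,
        v VARCHAR(255)
    ) ENGINE=MEMORY;"
    ```
    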

  5. #5

    Default Re: Running Mysql DB in Memory.

    I believe there are several in-memory MySQL engines available, such as Blackray:

    BlackRay - Wikipedia, the free encyclopedia

    In-memory database - Wikipedia, the free encyclopedia

    Cheers,
    Pete

  6. #6
    Join Date
    Jan 2009
    Location
    Switzerland
    Posts
    1,529

    Default Re: Running Mysql DB in Memory.

    Quote Originally Posted by keyb_user View Post
    I would like to run MySQL in memory in order to take advantage of the very high speed IO.
    Have a look at MySQL Cluster. It is designed to run in memory and can be installed on a single machine as well (though losing some of its advantages).
    Technology is 'stuff that doesn't work yet.' -- Bran Ferren

  7. #7
    Join Date
    Jun 2008
    Location
    Portugal
    Posts
    234

    Default Re: Running Mysql DB in Memory.

    Hi Ken,


    Quote Originally Posted by ken_yap View Post
    Maybe you want the memory storage engine:
    Hmmm ... not exactly what I was thinking.
    I have to be more precise.
    The MEMORY engine creates tables in memory, so for all practical purposes it is like HSQLDB, Derby and the like (when used in memory-only mode).
    There is also no persistence.
    Once the computer powers down ... so does all the data in those tables.
    One can indeed make a setup similar to the one I was thinking of ... when the application boots, load the data into the MEMORY tables from other backup tables or from a file.
    Both procedures ... save to file on shutdown and load from file on boot ... have to be done.
    In that sense it is like HSQLDB, Derby and the like.
    Very good DBs, no doubt, but if I already have all my code optimized and ready for MySQL, why change, if a simple setup like using tmpfs can solve the problem?


    If not you could develop your own storage engine.
    Wow ... hold your horses, Ken ... !!!
    I do not know if I really have the talent necessary for that.
    I would really like to take advantage of what people more focused on the issue have to offer.
    And if Linux can provide a functionality that solves the problem ... excellent!

    I am testing it now; I will report back in a couple of hours about my small tests ...


    Regards.

  8. #8
    Join Date
    Jun 2008
    Location
    San Diego, Ca, USA
    Posts
    10,920
    Blog Entries
    2

    Default Re: Running Mysql DB in Memory.

    I think I see what you're really asking about...

    Be aware though that your goal conflicts with most database requirements ... database engines not only have to be fast, but usually you can't sacrifice reliability, so the more data you move into memory, even temporarily, the more you risk the data's integrity should there be a power glitch.

    So, might I recommend...

    You can create OLAP cubes and "views" on some database engines as virtual data stores. Note though that these virtual stores do not ordinarily write back to the original data so are used mainly for "read only" and hardly ever for "write" operations.

    You can deploy a copy of the data to a virtual disk, but you severely risk the data's integrity. You'd have to at least invest heavily in critical backup power systems (both within the machine and outside) to decrease the risk but there's no getting around the overall increased risk.

    If you deploy either of the above and wish to "write back" changes to disk, many database clustering architectures support "lazy write-backs", which generally means that changes can be written as a lower-priority process that won't slow down overall performance while still providing for the atomicity of transactions ... which means, in plain language, that if a disruption occurs you are guaranteed to lose some of the latest data, but the integrity of your databases is preserved.
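    In stock MySQL/InnoDB, a rough equivalent of such lazy write-back is the redo-log flush setting; this fragment is a sketch, and whether roughly a second of lost transactions is acceptable is an assumption you would have to verify for your workload:

    ```ini
    [mysqld]
    # Write the redo log at each commit, but fsync it to disk only about
    # once per second: far fewer fsyncs, at the cost of losing up to ~1s
    # of committed transactions if the OS crashes or power fails.
    innodb_flush_log_at_trx_commit = 2
    ```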

    Last comment...
    Considering how cheap disks are today, and some of the advances in massively extendable/expandable databases, both relational (like MySQL) and non-relational (like Cassandra), which don't have to risk your data, I'd recommend you first list your project objectives and requirements in concrete terms, then take a look at what technical solution might best address your needs.

    HTH a bit,
    Tony

  9. #9
    Join Date
    Jun 2008
    Location
    Portugal
    Posts
    234

    Default Re: Running Mysql DB in Memory.

    Hi,

    Quote Originally Posted by tsu2 View Post
    I think I see what you're really asking about...

    Be aware though that your goal conflicts with most database requirements ... database engines not only have to be fast, but usually you can't sacrifice reliability, so the more data you move into memory, even temporarily, the more you risk the data's integrity should there be a power glitch.

    So, might I recommend...

    You can create OLAP cubes and "views" on some database engines as virtual data stores. Note though that these virtual stores do not ordinarily write back to the original data so are used mainly for "read only" and hardly ever for "write" operations.

    You can deploy a copy of the data to a virtual disk, but you severely risk the data's integrity. You'd have to at least invest heavily in critical backup power systems (both within the machine and outside) to decrease the risk but there's no getting around the overall increased risk.

    If you deploy either of the above and wish to "write back" changes to disk, many database clustering architectures support "lazy write-backs", which generally means that changes can be written as a lower-priority process that won't slow down overall performance while still providing for the atomicity of transactions ... which means, in plain language, that if a disruption occurs you are guaranteed to lose some of the latest data, but the integrity of your databases is preserved.

    Last comment...
    Considering how cheap disks are today, and some of the advances in massively extendable/expandable databases, both relational (like MySQL) and non-relational (like Cassandra), which don't have to risk your data, I'd recommend you first list your project objectives and requirements in concrete terms, then take a look at what technical solution might best address your needs.

    HTH a bit,
    Tony
    I get your idea precisely.
    But mine will be a lot more simplistic:
    Get a 1U server with 256 GB of RAM or more, and only two hard drives to serve as backup storage. Another 1U slot holds a UPS connected to the server. As soon as there is a power failure, a heartbeat/cron system that checks the PSU's input/output can back up the DB and stop the application, and things will be OK.
    This does lower the availability of the system a lot.
    Every time there is a power failure ... the system must be stopped somehow and the database backed up to disk.

    But also remember ... there is a limit to the number of transactions on an InnoDB table on a hard drive (IOPS ...); that limit can only be surpassed with memory or SSD hard drives ...

    Regards.

  10. #10
    Join Date
    Jun 2008
    Location
    San Diego, Ca, USA
    Posts
    10,920
    Blog Entries
    2

    Default Re: Running Mysql DB in Memory.

    Quote Originally Posted by keyb_user View Post
    Hi,



    I get your idea precisely.
    But mine will be a lot more simplistic:
    Get a 1U server with 256 GB of RAM or more, and only two hard drives to serve as backup storage. Another 1U slot holds a UPS connected to the server. As soon as there is a power failure, a heartbeat/cron system that checks the PSU's input/output can back up the DB and stop the application, and things will be OK.
    This does lower the availability of the system a lot.
    Every time there is a power failure ... the system must be stopped somehow and the database backed up to disk.

    But also remember ... there is a limit to the number of transactions on an InnoDB table on a hard drive (IOPS ...); that limit can only be surpassed with memory or SSD hard drives ...

    Regards.
    If your database is small enough, or at least the working part of your database, then it might be possible to deploy on a single server as you propose ... but depending on your database application, you would typically do better to deploy a proper RAID system (6, 10 or 01) and take a look at your typical use and execution plans ... for a bigger system, consider clustering on multiple machines.

    You really need to properly analyze your database needs before proposing hardware configurations.

    Provisioning a database server properly is a very big topic; I recommend you do a little research. Storage I/O is only one possible bottleneck; there are also CPU and RAM issues, reliability and availability on multiple levels, and more.

    If possible I also recommend you consult with the architects of your DB. In my experience it's far more important for the DB to be architected and optimized as well as possible; from that, work out the hardware requirements and then provision properly. If you don't do that, or do it in the wrong order, you'll likely end up wasting a ton of money and getting a ton of unacceptable performance along the way.

    A shortlist of things that should be part of your analysis:
    Short term vs. intermediate term vs. long term: how fast and how far might it grow, and what do you need to design for?
    Application needs and types of queries. Run actual tests if possible.
    QoS needs: can the application be down for maintenance and/or emergencies?

    And at the beginning, and again at the end: how much money are you able, or willing, to spend to achieve your objectives?

    HTH,
    Tony

