Results 61 to 70 of 706
  1. #61
    Yes, it works again. Thanks a lot :-)

  2. #62
    All ARK servers are down. Please fix.

    Thank you.

  3. #63
    ARK servers updated and back up.

  4. #64
    Another week, another server outage.

    Servers are showing up on the list but once again are not working. I guess I will check back here in 2 days to see if they work again.

    Do the servers die for 1-2 days every time an update hits? If so, this server is unplayable. No wonder there are 0 players outside of my own tribe right now.

    4x server offline for 1-2 days each time in the last 2 weeks, is all I am saying.

  5. #65
    Cake! InsaneJ's Avatar
    Join Date
    Jan 2012
    Location
    Cakeville
    Posts
    5,056
    Blog Entries
    21
    Quote Originally Posted by Shacck View Post
    Another week, another server outage.

    Servers are showing up on the list but once again are not working. I guess I will check back here in 2 days to see if they work again.

    Do the servers die for 1-2 days every time an update hits? If so, this server is unplayable. No wonder there are 0 players outside of my own tribe right now.

    4x server offline for 1-2 days each time in the last 2 weeks, is all I am saying.
    Servers should be up now.

    If you ask me, the way ARK handles updates plain sucks. Every minor update to either ARK or one of the mods completely breaks the server and is automatically pushed to everybody who has ARK installed. With Minecraft mod packs we can at least control when an update is pushed, which keeps server and client in sync. With ARK you can update everything, and 5 minutes later another minor update breaks everything and you have to do it all again.

    I don't play ARK so I won't notice there is a problem unless someone tells me. Jiro does play ARK but he can't be expected to be on the server each and every day.

    We all understand it can be frustrating not being able to play on a server you have invested in. That is why we always try to respond to these posts as quickly as possible. As long as there is no working way to automatically detect updates and then update the ARK server and mods, this will remain an issue, I'm afraid.
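    As an aside, a rough sketch of what automatic update detection could look like: Steam exposes a public `ISteamApps/UpToDateCheck` endpoint that reports whether an installed version is still current. The app id, version string, and `update_hook` below are illustrative, and the actual update step (steamcmd plus a server restart) would have to be plugged in:

    ```python
    import json
    import urllib.request

    ARK_APP_ID = 346110  # ARK: Survival Evolved (illustrative app id)

    def is_up_to_date(app_id, installed_version, fetch=None):
        """Ask Steam's public UpToDateCheck endpoint whether our installed
        version is still current. `fetch` can be stubbed out for testing."""
        url = ("https://api.steampowered.com/ISteamApps/UpToDateCheck/v1/"
               f"?appid={app_id}&version={installed_version}")
        if fetch is None:
            def fetch(u):
                with urllib.request.urlopen(u) as resp:
                    return json.load(resp)
        data = fetch(url)["response"]
        return data.get("up_to_date", False)

    def maybe_update(app_id, installed_version, update_hook, fetch=None):
        """Run `update_hook` (e.g. a steamcmd wrapper that also restarts
        the server) when the installed version is out of date."""
        if not is_up_to_date(app_id, installed_version, fetch=fetch):
            update_hook()
            return True
        return False
    ```

    Run from cron every few minutes, something like this would at least catch ARK's own updates automatically; mod updates would still need a separate check.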

  6. #66
    Yeah, it is working now.

    I didn't mean to blame you. I know the way this game updates is crazy.

    Thanks for everything :-)

  7. #67
    Cake! InsaneJ's Avatar
    No worries

  8. #68

    Server needs a restart or something.

    Everything is extremely slow and half the base isn't loading in.

    Restart please.
    Last edited by Shacck; 15th December 2016 at 11:59.

  9. #69

    Server needs a restart or something.

    Everything is extremely slow and half the base isn't loading in.

    Restart please.

    Your website appears to be laggy as hell. Something is going on.

  10. #70
    Cake! InsaneJ's Avatar
    We've been having intermittent issues with VMware ESXi where disk latency would creep up and eventually become so high that performance would plummet. High disk latency means that reading from and/or writing to disk takes longer. Meanwhile the system has to wait for confirmation that something was read from or written to disk, which causes the slowdowns.
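    To put rough numbers on how latency caps throughput, here's a simplified back-of-the-envelope model (it ignores the controller cache and command reordering, so treat it as an upper bound, not a measurement):

    ```python
    def max_iops(queue_depth, latency_ms):
        """Rough ceiling on IOPS for a device that can keep `queue_depth`
        requests in flight, each taking `latency_ms` on average.
        Simplified: ignores caching and command reordering."""
        return queue_depth * 1000.0 / latency_ms

    # A SATA drive (queue depth 32) at a healthy 5 ms per request:
    healthy = max_iops(32, 5)      # 6400 IOPS ceiling
    # The same drive when latency balloons to 1000 ms:
    degraded = max_iops(32, 1000)  # 32 IOPS ceiling
    ```

    That 200x drop in the ceiling is why everything grinds to a halt once latency hits the 1000ms range.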

    Our server runs on an Areca 1680i RAID controller, which has a dual-core PowerPC CPU @ 1.2GHz and 4GB of cache memory. It's a fast RAID controller. However, it's not on VMware's HCL, which means there is no support for it from VMware. Areca has drivers for VMware and maintains them with regular updates; they just don't pay VMware to certify their products.

    Basically, Areca is great for workstations, servers and the like, just not for professional VMware installations due to the lack of support. But since we already had this card and it's an $800 piece of kit, I wasn't about to get rid of it after we moved from Xen to VMware a couple of months ago. The problem with our setup is that VMware assumes it runs on systems with newer RAID controllers, which have larger queue depths, and SAS drives, which have larger queue depths than the SATA drives we are currently using. The lower queue depths of our hardware mean that VMware tends to saturate the queues with disk I/O commands. When that happens, things slow down.

    To give you an idea: VMware recommends RAID controllers with a queue depth of 100 or more. The Areca 1680i has a queue depth of 255, which is OK, although more modern controllers have queue depths of up to 1,000. The SATA drives we use only have a queue depth of 32, whereas SAS drives have a queue depth of 254. That's basically why SAS drives are more expensive than SATA drives. On a regular PC you won't notice much of a performance difference, but in multi-user scenarios such as our various virtual machines the difference is significant. Since we run most of our VMs from a RAID-1 array of two 3TB drives and a RAID-5 array of SSDs, we run out of queue depth to the drives pretty quickly. Again, this is mostly a VMware thing; when we ran the Xen hypervisor we never had any issues like this.
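    A crude way to see how quickly those queue slots run out when several VMs share the same array (this is a simplification, since real schedulers don't split the queue evenly and RAID-1 reads can use both mirrors):

    ```python
    def per_vm_queue_depth(controller_qd, drive_qd, drives_in_array, vms):
        """Crude estimate: the array can keep at most
        min(controller queue, sum of drive queues) commands in flight,
        shared evenly across the VMs hitting it."""
        array_qd = min(controller_qd, drive_qd * drives_in_array)
        return array_qd / vms

    # Ten VMs sharing a RAID-1 pair of SATA drives (QD 32 each)
    # behind the Areca 1680i (QD 255):
    sata = per_vm_queue_depth(255, 32, 2, 10)   # 6.4 slots per VM
    # The same setup with SAS drives (QD 254 each):
    sas = per_vm_queue_depth(255, 254, 2, 10)   # 25.5 slots per VM
    ```

    With SATA the drive queues are the bottleneck; with SAS the controller becomes the limit instead, which is the point VMware's recommendation is getting at.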

    Anyway, to get back to our disk latency issues: I've upgraded the Areca driver from arcmsr-1.30.00.02 to arcmsr-1.30.00.03. Unfortunately I couldn't find the changelog for this minor update, so I have no idea what's changed. It might just be compatibility with the new VMware ESXi 6.5, or it might have solved our issues.

    I've also upgraded from ESXi 6.0 to ESXi 6.5. So far things seem to be doing OK. Our host has a 14-core / 28-thread Xeon CPU with 96GB RAM and runs a bunch of virtual machines. This is a graph of the disk performance as monitored by ESXi right after booting the system, starting all the VMs, and starting all the Minecraft and ARK servers.
    [Image: ESXi_disk_usage_after_reboot.png]

    As you can see, disk I/O isn't very high, even during system startup. What's more important is the disk latency graph (light blue). It seems to stay below 30ms, which isn't great, but it's nowhere near as bad as it was when we suffered through the extreme slowdowns; during those periods we'd see disk latency in excess of 1000ms. Ideally you'd want disk latency below 5ms, but for that we'd need SAS drives, which are quite expensive. The graph above should be good enough for our purposes though. We'll see how it goes from here and keep an eye on things.
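    Reading the graph that way can be automated. A small sketch that classifies latency samples using the same thresholds mentioned above (the function name and thresholds are just for illustration; a real setup would pull samples from ESXi's performance counters):

    ```python
    def flag_latency_spikes(samples_ms, warn_ms=30, crit_ms=1000):
        """Classify disk latency samples the way we read the ESXi graph:
        below `warn_ms` is fine, at or above `crit_ms` matches the
        extreme slowdowns, anything in between is worth watching."""
        return [("ok" if s < warn_ms else "crit" if s >= crit_ms else "warn")
                for s in samples_ms]

    flag_latency_spikes([4, 28, 250, 1400])  # → ['ok', 'ok', 'warn', 'crit']
    ```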

    Alternatively we could add more SATA drives to spread the available disk queues between VMs. But yeah, money doesn't exactly grow on trees. If anything, I'd rather replace the current RAID-1 array with NL-SAS drives. Those aren't much more expensive than regular SATA, but less expensive than SAS, and they still provide better queue depths, which should help in our case.

    For those who want to read a bit more about VMware and queue depths, check this out: http://www.yellow-bricks.com/2014/06...depth-matters/
