NoseyNick's Nebula - an Artemis SBS server-in-the-cloud

"Nebula" is my Artemis SBS server in the cloud. Specifically, it's a bunch of scripts that can kick off an AWS or Google Cloud server running a fresh untouched Ubuntu Linux, will install WINE and a bunch of other pre-requisites, install and configure Artemis, run it, click "server", and be ready for your players to connect to within 5 minutes. You can then browse to the server and remote-control it over the web, you can connect your own VNC client, or experts can connect using only SSH - there are some commands installed to "click buttons for you".


Step 1: Pick your region and VM size (once)

Unfortunately an Artemis server requires a VM with a spec that's too big for the AWS "free tier". Your VM is going to cost you some money to run, but I've found a spec that seems to run Artemis OK[*] and supports a reasonable number of clients/players for about 17c/hr. I think 50c for 3hrs of Artemis is OK, isn't it? Remember how far 50c used to go at the arcade? When you're done with it you can completely nuke it - there are no disks to keep around, so no cost when you're NOT using the server.

Your "region" should probably be chosen according to where most of your players are located, with a bit of an eye on price. See also Varun Agrawal's AWS ping test or Jesús Federic'sAWS latency test - you may want to ask all your players to try one for a few minutes and collect the results.

For price comparisons, start at AWS's "on demand" pricing page, or the neat comparison tool. You want 4 vCPUs, maybe 8 for big multi-ship "fleet" games (and Intel - NOT ARM/AWS Graviton, nor even AMD). Usually the "compute optimised" instances are cheapest. Nebula is best tested on c5.xlarge and c5.2xlarge. You should obviously check prices for yourself - I am not going to pay your bill for you - but last time I checked (2021-05-13) some appropriate prices (all vcpu=4, sorted cheapest first) were:

0.170USD/hr: vcpu=4 ecu=20 memory=8GiB region=ap-south-1 instanceType=c5.xlarge
0.170USD/hr: vcpu=4 ecu=20 memory=8GiB region=us-east-1 instanceType=c5.xlarge
0.170USD/hr: vcpu=4 ecu=20 memory=8GiB region=us-east-2 instanceType=c5.xlarge
0.170USD/hr: vcpu=4 ecu=20 memory=8GiB region=us-west-2 instanceType=c5.xlarge
0.182USD/hr: vcpu=4 ecu=20 memory=8GiB region=eu-north-1 instanceType=c5.xlarge
0.186USD/hr: vcpu=4 ecu=20 memory=8GiB region=ca-central-1 instanceType=c5.xlarge
0.192USD/hr: vcpu=4 ecu=20 memory=8GiB region=ap-northeast-2 instanceType=c5.xlarge
0.192USD/hr: vcpu=4 ecu=20 memory=8GiB region=eu-west-1 instanceType=c5.xlarge
0.194USD/hr: vcpu=4 ecu=20 memory=8GiB region=eu-central-1 instanceType=c5.xlarge
0.196USD/hr: vcpu=4 ecu=20 memory=8GiB region=ap-southeast-1 instanceType=c5.xlarge
0.202USD/hr: vcpu=4 ecu=20 memory=8GiB region=eu-south-1 instanceType=c5.xlarge
0.202USD/hr: vcpu=4 ecu=20 memory=8GiB region=eu-west-2 instanceType=c5.xlarge
0.202USD/hr: vcpu=4 ecu=20 memory=8GiB region=eu-west-3 instanceType=c5.xlarge
0.204USD/hr: vcpu=4 ecu=20 memory=8GiB region=us-gov-east-1 instanceType=c5.xlarge
0.204USD/hr: vcpu=4 ecu=20 memory=8GiB region=us-gov-west-1 instanceType=c5.xlarge
0.204USD/hr: vcpu=4 ecu=20 memory=8GiB region=us-west-2-lax-1 instanceType=c5.xlarge
0.211USD/hr: vcpu=4 ecu=20 memory=8GiB region=me-south-1 instanceType=c5.xlarge
0.212USD/hr: vcpu=4 ecu=20 memory=8GiB region=us-west-1 instanceType=c5.xlarge
0.214USD/hr: vcpu=4 ecu=20 memory=8GiB region=ap-northeast-1 instanceType=c5.xlarge
0.214USD/hr: vcpu=4 ecu=20 memory=8GiB region=ap-northeast-3 instanceType=c5.xlarge
0.216USD/hr: vcpu=4 ecu=20 memory=8GiB region=ap-east-1 instanceType=c5.xlarge
0.222USD/hr: vcpu=4 ecu=20 memory=8GiB region=ap-southeast-2 instanceType=c5.xlarge
0.228USD/hr: vcpu=4 ecu=20 memory=8GiB region=af-south-1 instanceType=c5.xlarge
0.262USD/hr: vcpu=4 ecu=20 memory=8GiB region=sa-east-1 instanceType=c5.xlarge
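If you fancy a quick-and-dirty latency comparison of your own (to complement the browser-based ping/latency tests mentioned above), you can ping the EC2 API endpoint in each candidate region. This is only an approximation - the API endpoint is not your game server - and some networks block ICMP entirely, but it helps narrow the choice:

```shell
# Ping the EC2 API endpoint in a few candidate regions and print each
# round-trip summary line. Edit REGIONS to taste (names as per the price
# list above).
REGIONS="us-east-1 us-east-2 us-west-2 ca-central-1 eu-west-1 eu-west-2"
for R in $REGIONS; do
    HOST="ec2.$R.amazonaws.com"
    echo "== $R ($HOST) =="
    # 3 pings, quiet summary only; fall back gracefully if ICMP is blocked
    ping -c3 -q "$HOST" 2>/dev/null | grep -E 'rtt|round-trip' \
        || echo "   (no ICMP reply - use the browser-based tests instead)"
done
```

Lower average round-trip time generally means a smoother game for the players nearest that region.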

Step 2: AWS account (once)

You'll need to get yourself an AWS account ("create a free account"), unless you already have one of course!

You need to create or import an SSH keypair in the right region. AWS seems to need this even if you don't use it. 😀 If you're on Linux you probably already know what SSH is and probably already have an SSH key - you can import your existing public key. If you're on Windows you might want PuTTY Secure Shell - this is a way to connect securely across the internet to your Nebula server, but you probably don't need it any more - you can just browse to your Nebula server (see below).

You'll want a Network Security Group in the right region. Call it "2010", leave the default SSH (TCP/22) rule, and add Custom TCP port 2010 from Anywhere (for Artemis clients) plus HTTP (TCP/80) for the web remote-control:

Click DONE.
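If you prefer the command line, the same security group can be created with the AWS CLI. This is a sketch assuming the AWS CLI is installed and configured; the commands are built and printed (not run) so you can inspect them first - copy/paste them, or pipe the output to sh, to actually create the group:

```shell
# Print the CLI commands that would create the "2010" security group:
# SSH (22), HTTP (80, web remote-control) and TCP 2010 (Artemis clients),
# all open to Anywhere.
REGION=us-east-1    # <- the region you chose in Step 1
echo "aws ec2 create-security-group --region $REGION \
--group-name 2010 --description 'Artemis Nebula server'"
for PORT in 22 80 2010; do
    echo "aws ec2 authorize-security-group-ingress --region $REGION \
--group-name 2010 --protocol tcp --port $PORT --cidr 0.0.0.0/0"
done
```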

There's no cost for any of the above SO FAR.

Step 3: Start your VM

Rookies / cadets who prefer the AWS web console...

  • Fire up the "Launch Instance Wizard"
  • Make sure you're in the right region (drop-down in the top-right)
  • Select Ubuntu Server (Nebula prefers Ubuntu 20.04 LTS; x86, NOT ARM)
  • Choose an Instance Type: c5.xlarge (or whatever you chose above)
  • "Configure Instance Details" (NOT review+launch)
  • "Shutdown behaviour: Terminate". You do not need to keep the VM around, you can nuke it when done, and start again "from scratch" next time
  • Advanced Details... paste this into "User data" (or see Optional Extras for other things that you can set in here):
    wget -t3 -T3 -O- \
      | NAME=275stock VNCPASS=something bash
    It is STRONGLY recommended that you set VNCPASS to something secure, otherwise this can be (ab)used by your players or anyone who stumbles across the Web UI remote control and does a little research.
  • Skip to step 6: "Configure Security Group"
  • "Select an existing security group" and choose the "2010" one you made above (or you could make it for the first time - leave SSH/TCP/22 and add Custom/TCP/2010/Anywhere, HTTP/TCP/80)
  • Review and Launch, Launch
  • Choose the SSH keypair you created/uploaded above (or you could make a new (disposable) one at this point)
  • Launch Instance
  • Return to the list of instances, make a note of the "Public IP" of your shiny new server!

Expert engineers who are using Linux and/or able to run BASH scripts and have the AWS CLI installed and configured...

Download the launch script and check the variables at the top. You probably want to change SSHKEY=name-of-your-ssh-key; you may want to change REGION=us-east-1 and a corresponding IMAGE_ID=ami-927185ef, and maybe INSTANCE_TYPE=c5.xlarge - see the notes at the top of the script. All except INSTANCE_TYPE can be set in a config script if you wish. Some stuff is also driven by NAME, which can be set when you run the script (see below).

Then just run it:

./ 275stock

Make a note of the Public IP it tells you.
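For reference, the launch script boils down to something like the following AWS CLI call. This is a sketch only: the AMI ID below is a PLACEHOLDER (AMI IDs differ per region - find a current Ubuntu 20.04 LTS x86_64 one for yours), and the user-data bootstrap line is the same wget|bash line used in the web-console route. The command is printed rather than executed, so nothing launches by accident:

```shell
# Roughly what the launch script runs under the hood.
REGION=us-east-1
IMAGE_ID=ami-xxxxxxxxxxxxxxxxx      # <- PLACEHOLDER, not a real AMI
SSHKEY=name-of-your-ssh-key
LAUNCH="aws ec2 run-instances --region $REGION --image-id $IMAGE_ID \
--instance-type c5.xlarge --key-name $SSHKEY --security-groups 2010 \
--instance-initiated-shutdown-behavior terminate"
# Add --user-data with the wget|bash bootstrap line from Step 3, then
# run the printed command by hand once you're happy with it:
echo "$LAUNCH"
```

Note --instance-initiated-shutdown-behavior terminate - that's the CLI equivalent of the "Shutdown behaviour: Terminate" advice above, so a simple poweroff nukes the VM for you.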

Step 4: Wait for it to start

Rookies / cadets who would prefer a web UI...

After a minute or so you should be able to connect a web browser to the Public IP above (just http://YOUR.PUBLIC.IP/ or whatever). You should see a log summary, which will go through approximately the following steps:

YYYY-MM-DD HH:MM:SS : Web UI install
YYYY-MM-DD HH:MM:SS : Keeping config
YYYY-MM-DD HH:MM:SS : Nebula server booting
YYYY-MM-DD HH:MM:SS : SSH config...
YYYY-MM-DD HH:MM:SS : APT updates and installs...
YYYY-MM-DD HH:MM:SS : Downloading Artemis 271ben bits...
YYYY-MM-DD HH:MM:SS : Making click utils
YYYY-MM-DD HH:MM:SS : VNC setup...
YYYY-MM-DD HH:MM:SS : Making click utils
YYYY-MM-DD HH:MM:SS : Artemis mods and config ...
YYYY-MM-DD HH:MM:SS : Nebula Server is ready!
YYYY-MM-DD HH:MM:SS : DONE - stop the clock!

At this point you should see a link to remote-control your Artemis server from your browser - you'll need the password you set in VNCPASS above (you DID set VNCPASS=something, right?) - and then you should see your Artemis server running!

If it DOESN'T work, in particular if you see the message "Artemis failed to start properly" or if you see "Artemis.exe has crashed" in the remote-control, then you may need to restart Artemis. Click "In case of emergency: Restart Artemis" and wait a minute or so.

Worst-case, you could terminate the server and start again. See Step 6: Shutdown below, then go back to Step 3 above to start another VM

Expert engineers should be able to connect to it using SSH (or PuTTY) within a minute, for example:

ssh -L5900:127.1:5900 ubuntu@  # <-- use real IP here
tail -1000f /var/log/cloud-init-output.log  # to watch boot progress

If using PuTTY, you'll want to connect to the supplied IP as user "ubuntu", and arrange for a "local port forward" of local port 5900 to remote port 5900.

If it works, you'll see a lot of headers like:

#### YYYY-MM-DD HH:MM:SS : Web UI install
... and a lot of other debugging output in between. When done, you should see something like...
#### YYYY-MM-DD HH:MM:SS : Waiting 30 secs for server...
#### YYYY-MM-DD HH:MM:SS : Waiting 29 secs for server...
#### YYYY-MM-DD HH:MM:SS : Waiting 28 secs for server...
tcp    0    0*    LISTEN
#### YYYY-MM-DD HH:MM:SS : DONE - stop the clock!
If it DOESN'T work, in particular if you don't see the "tcp 0 0" line, you may want to hit control-C and restart the entire process:
nebula bash
If it still doesn't work... sorry! Get hold of me (probably on the United Stellar Navy Discord server) and tell me what went wrong. I'll probably appreciate a copy of your /var/log/cloud-init-output.log file from Step 4 above, and/or I might want to log into your server to nose around - we can discuss how to do that once you get hold of me.

Step 5: Play!

You, and your other players, can point your Artemis clients at the Public IP address.

You're going to need to choose game type, skill level, Terrain/LethalTerrain/FriendlyShips/Monsters/Anomalies etc though, and you're going to need to start the game. There are now 3 ways to do this:

  1. Point a VNC client at the server (EG vncviewer), OR
  2. Browse to the IP address of your server and use the (less secure) web-based VNC remote-control, OR
  3. SSH to the machine and run a series of commands to click buttons for you, and (without being able to see them) HOPE it has done what you expected
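For example, option 1 usually means an SSH tunnel first, so VNC traffic stays encrypted and port 5900 never needs opening to the internet. A sketch (203.0.113.10 is a documentation placeholder - substitute your real Public IP); the commands are printed so you can copy/paste them:

```shell
# VNC over an SSH tunnel: forward local port 5900 to the server's 5900,
# then point the VNC client at the local end of the tunnel.
IP=203.0.113.10                                       # <- your real Public IP
echo "ssh -f -N -L 5900:127.0.0.1:5900 ubuntu@$IP"    # background tunnel
echo "vncviewer 127.0.0.1:5900"                       # password = VNCPASS
```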

Some of the special commands available to you are:

Some other useful scripts:

Some regular Linux commands that may be useful:

Step 6: Shutdown

From the VNC remote-control, click "In case of emergency: Power Off". This assumes you followed the advice for "Shutdown behaviour: Terminate"

If you are SSHed into the VM: sudo poweroff - again as long as you followed the "Shutdown behaviour: Terminate" advice

From the AWS web console, make sure you're in the right region (if you forgot, then yes, sorry, you're going to have to check every single region). Click on the VM in the list. Click "Actions", "Instance State", "Terminate".

From the AWS CLI: aws ec2 terminate-instances --region $REGION --instance-ids $ID
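If you've lost track of $ID, here's a sketch for finding and terminating the instance from the CLI. The commands are built as strings and printed (not run), so you can double-check the instance ID before anything gets destroyed; the ID below is a placeholder:

```shell
# List running instances (ID + Public IP) in the region, then terminate
# the one you pick.
REGION=us-east-1
echo "aws ec2 describe-instances --region $REGION \
--filters Name=instance-state-name,Values=running \
--query 'Reservations[].Instances[].[InstanceId,PublicIpAddress]' \
--output text"
ID=i-0123456789abcdef0    # <- placeholder: paste the real ID from the list
echo "aws ec2 terminate-instances --region $REGION --instance-ids $ID"
```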


Davis has made a video of the above - Thanks Davis! He takes you through using the AWS web console "Launch Instance Wizard", then use of PuTTY and VNC on Windows, and terminating via the AWS console. Some minor updates:

How It Works

Optional Extras

A script produces the price list above. It is not guaranteed to be up-to-date or correct - I'm not responsible for any errors on your AWS bill or anything.

A number of environment variables can be set in "Step 3: Start your VM", or if you are re-running manually in Step 4. The examples show NAME=275stock. If you're using the launch script from your own machine, you can set them in the environment, or in a config file which will be read by the script. If you're using the AWS web console, you'll add them in your "User data" between the | and bash.
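As a worked example, a "User data" line setting a few of these variables might look like the following config fragment. The bootstrap URL is elided here exactly as in Step 3 - use the real one; NAME and VNCPASS are from the examples above, and SHUTDOWN is the auto-poweroff safety net mentioned in the changelog:

```shell
# Everything between the | and bash is just environment variables:
# NAME picks the Artemis version/mod bundle, VNCPASS secures the
# remote-control, SHUTDOWN powers the VM off even if you forget.
wget -t3 -T3 -O- <nebula-bootstrap-URL> \
  | NAME=275stock VNCPASS=pick-something-secret SHUTDOWN=5hours bash
```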


Artemis.exe crashes very occasionally, usually at startup - probably no more or less often than any other Windows Artemis server, to be honest 😕

I usually test by cranking everything up to level 11, "Many Many Many Many Many", making a few dozen connections from another server, and turning on the undocumented AI player (press "E" on the server). This isn't quite the same as testing with real players, the connections aren't "doing anything", but it does prove that the server can handle the same number of in-game objects, and push the packets onto the network. It gets less INPUT than a real game but I'm not sure if there's much I can do about that.

We have tested with fair-sized groups of human players too. "Load average" stats during typical USN games show the machine using about 2.5 - 3.5 of the 4 CPUs allocated to it, with no "steal time", so Artemis seems to not be bottle-necked. AWS still uses "powers of 2" CPUs, so 2 would be too few, 8 would be too many, 4 seems to be the Goldilocks "just right".

Rear Admiral Dave Trinh of the TSN 1st Light Division has referred to this as "our blazing fast Nebula server", has presented Nebula at some trans-Canada events, and is hoping to work up to a full 8-ship fleet with dozens of online players.

Starry commented on the speed, performance, and lack of lag ("even from the UK") when playing some of our test games running in AWS Ohio.

Davis at Eastern Front is testing it out, "Good News! Nosey Nick Waterman has found a way to create a server that cuts the cost of a server by A LOT. our first server was projected to cost ABOUT 60.00 a month now with his creation we are running at .15 an hour of play! I would like to thank Nosey Nick Waterman for all the developing he has done for the community's to enjoy this game in this way" [...] "Fully tested and got 1 FULL game played and only cost .25 soooooo yea".

After most Nebula USN games, I also get my Artemis-Puppy bot to post a quick survey, which looks a bit like...

cute puppy     Artemis-Puppy BOT
Thanks Crew! Please rate the performance of this server - it's a bit of an experiment and I'd appreciate feedback:
B Pretty good
D Not good

Total votes so far (2019-02-19) - see note: the bot itself votes for A-F to make the options appear, making it LOOK like every letter has at least 1 vote per survey, but these "fake votes" have been removed from these totals:
A 8   B 10   C 2   D 1   E 0   F 0  

... so it looks like USUALLY the players agree these servers are "Pretty good" to "Marvelous" 😀

Version History

I've not kept a very accurate version history, but some noted changes / milestones are:

2017-12-15 First recorded Artemis-on-WINE-on-Linux-on-AWS tests for USN
2018-01-21 Tested with 2 player ships and 4 fighters for nearly 1hr game
2018-03-01 First recorded use with the name "Nebula"
2018-03-09 First reporting back to Discord
2018-03-22 First archived copy of cloud-init-output.log
2018-04-16 Tested at level 11 with 3 player ships and many fighters
2018-08-02 First Google Cloud tests - Nebula goes multi-cloud!
2018-11-14 First recorded use with Eastern Front (no EF mod)
2019-02-10 Tested in us-east-2 for TSN Canada fleet
2019-02-15 Big 1hr & 45min multi-ship TSN 4LD Canada games - performance was great!
2019-02-16 Added mission downloads especially for nginecho and OpenSpace. Better system load reports. Clarified license / copyright
2019-02-18 Davis for Eastern Front made a HOWTO YouTube vid - Thanks Davis!
2019-02-21 Upgraded from Ubuntu 16.04 to 18.04 LTS. Ubuntu WINE is now fine (and much faster than winehq). Added support for Eastern Front mod + ships + missions. Doc overhaul
2019-02-26 Updated for new URLs / formats
2019-03-03 Downloader improvements. Initial support for TSN-RP mods + sandbox + ships. Tidied, modularised mkclick, keepconf, missions, downloader, SSH config, others. Doc polishing. AWS ping test links
2019-03-04 dumps USERDATA for better debug / re-use
2019-03-07 Simplified / modularised ship name scripts. Further downloader improvements. Improved PublicIP detection. Support for list of MISSIONS=URLs
2019-03-14 TSN-RP ship names updated. New nebula@ email address. More doc updates. Updated AWS prices. Moved to /artemis/nebula URL to reflect more-than-AWS support. More consistent logging to ~/logs/
2019-03-17 Introduced Web UI / VNC! Fixed VNCPASS (now much more important!)
2019-03-23 Major doc overhaul, mostly to reflect Web UI, and add this changelog!
2019-03-31 Added support for Starry's Hermes, SHUTDOWN=5hours, more reliable (longer) click-* commands, mini-menu for Power Off / Restart Artemis. TeamSpeak support.
2019-04-11 Added Artemis 2.7.2 beta (stock). Fixed a SHUTDOWN=Nhours bug. Ran TSN 4th Light Division (London, Ontario, Canada) games.
2019-04-14 Added Artemis 2.7.2 beta with Ben's / TSN-RP / EF mods (untested). Upgraded EF Mod to V1.1. Fixed minor html5 bugs in documentation. Fixed covering "back" button.
2019-04-23 Rolled 2.7.1 EF Mod back to previous (unspecified) version to workaround crash bug.
2019-05-01 The announcement of Nebula's first serious 8-ship test!
2019-05-03 Borrowed DaveT's fancy Nebula image from the above announcement, and installed it into the boot / log screen of running Nebula servers.
2019-05-10 Multi-ship PvP Arena mission game
2019-06-12 Updated for Eastern Front mod v1.2. TSN-RP ship updated. Some minor tidying.
2019-06-14 Fixed an old bug where "apt" was sometimes waiting for interactive input (which will never happen), despite asking it not to. If "APT package installs (may take a minute or so)" actually took MUCH longer, this is why, sorry :-/
2019-06-29 Added link, experimented with a1.xlarge (bad), t3a.xlarge (bad), t3.xlarge (works?)
2019-07-05 Added support for a bunch of versions of Artemis (2.7.0, 2.7.1, 2.7.2) with a bunch of versions of Ben's Mod (4.2.12, 4.3.2, 4.3.4, 4.3.5, and "latest")
2019-07-30 Fixed some obscure new compatibility bug between NoVNC / WebSockify
2019-09-04 Upgraded Teamspeak 3.6.1 to 3.9.1 - TS3 now works again instead of crashing. Added some code to upload and use your (properly formatted) ~/.ssh/ public key if no SSHKEY was specified
2019-09-06 Fixed some issues unzipping newer
2019-11-16 Added NAME=tsn-rp-git for latest version from github
2019-12-01 ADDED NAME=271tsn-rp-s-git for special low-poly server version from github.
2019-12-06 Added front/left/right/rear/tac/lrs/info options to
2019-12-11 Configs and fleet config for TSN Trans-Canada exercise. Worked around another new(ish) bug in NoVNC.
2019-12-23 Added support for Artemis 2.7.4
2019-12-24 Lots of TCP tuning... In our December coast-to-coast TSN-Canada 8-ship game, we had some issues with "hung connections", believed to be caused by the DSL/Cable/Mobile internet providers of the clients. In any case we had many issues with "We lost our [Helm/Weap/Eng] console and can't get back on", then we saw many game crashes, and had TCP sockets stuck in FIN_WAIT / TIME_WAIT that made it difficult to restart the server too. Our hypothesis was that "stuck (half-closed) TCP connections" were responsible for most, probably all, of these issues. It was hard to completely reproduce in later testing, but with deliberately-broken connections we could somewhat unreliably reproduce the issues. We can't solve the issues on all the clients without asking everyone to upgrade their internet connectivity, but we can make the server more tolerant. It will now kill off any "stuck" TCP connections within about 12-15 seconds, and should also be much better at restarting, with a much lower delay for the FIN_WAIT / TIME_WAIT issues too. Since this tuning, we have been unable to reproduce this type of crash, though obviously there are still plenty of other ways to crash Artemis 😕 We unfortunately CAN'T solve "we are locked out of helm and can't get back in" - there's only so much we can do on the outside of Artemis without having access to the code on the inside 😕
2019-12-30 Switched the web interface background image to the awesome "Nick on Engineering" image DaveT made of+for me
2020-02-26 Preliminary support for Artemis 2.7.5
2020-02-29 Some more 2.7.5-compatibility tweaks.
2020-03-01 International TSN-Canada multi-ship game on 2.7.5. Rock-solid, no crashes, no noted client hanging, and certainly no client lock-outs. Some lagginess for transatlantic players fixed by reducing server update rate. Overall: FUN!
2020-08-03 Experimental support for Empty Epsilon instead of Artemis - try setting EE_VER=EE-2020.04.09 with Ubuntu 18.04, or EE_VER=EE-2020.08.07 with Ubuntu 20.04
2020-08-08 Upgraded from Ubuntu 18.04 to 20.04 by default - see Artemis seems fine under WINE, and newer beta EE works (as above). Dropped support for 16.04 - it needed a rather hacky WINE install.
2020-08-30 First scheduled Nebula server, where I pre-arranged in advance for a server to start for DaveT whilst I was away, and email him the credentials to remote-control it. Turns out the server started fine, but the email with the credentials bounced, for a totally trivial fixable reason - won't be a problem again. If you need to schedule a Nebula server, speak to me, I can arrange for one to start even if I'm not around! 😀
2020-10-30 Added support for Eastern Front Mod 3.0.
Made shellcheck clean - should improve security and stability slightly.
2020-11-12 Made 2.7.5 (stock/vanilla) the new default - about time!
2021-02-11 8 Xim carriers, each with 6 Xim Bombers (no shuttles!) - WOW! 😀
2021-03-19 Some TeamSpeak-related tweaks for TSN Canada Joint Forces Operations
2021-03-21 Some fixes to the start scripts
2021-03-31 Added Empty Epsilon EE_VER=latest option. Some minor BASH shell fixes (found some that were not yet shellcheck clean)
2021-04-23 Updated some examples from 271ben to 275stock
2021-05-01 Documentation updates, especially around EE_VER and EE_PROXY. Improved debug log collection.
2021-08-09 Fixed some timeouts and retries, after spotting a server trying TWENTY TIMES (slowly) to fetch a mission script that wasn't there!

Coming Soon? Maybe?

(See also my other Artemis tools)