# aiOS CLI
## Overview
`aios-cli` is a command-line interface that provides much of the same functionality as the [aiOS desktop app](https://aios.network/). The CLI is a better fit if you are a developer who prefers to stay in the terminal, or if you want to run a node on a server, since it does not require a desktop environment.
The CLI has a lot of commands, but the basic idea is that it lets you run local models (for personal inference), host your downloaded models to provide inference to the network and earn points, and use models that other people are hosting for your own inference.
## Installation
To install on any platform you can use our install script, which is located in this repo and also hosted on our download endpoint. The script downloads the latest release, installs any required GPU drivers, and moves the binary to somewhere in your `PATH` so that you can access `aios-cli` globally. You can learn more about how the script works [here](/scripts/README.md).
While the script is the recommended way to install, you can also download the binaries directly from the releases section of this repository.
### Linux
```shell
curl https://download.hyper.space/api/install | bash
```
### Mac
```shell
curl https://download.hyper.space/api/install | sh
```
### Windows
You must be running an Administrator PowerShell for both the installation and uninstallation scripts to work on Windows.
```shell
# If you have a real version of `curl` (i.e. something that returns a valid version when you do `curl --version`)
curl https://download.hyper.space/api/install?platform=windows | powershell -
# Otherwise
(Invoke-WebRequest "https://download.hyper.space/install?platform=windows").Content | powershell -
```
## Uninstallation
Uninstallation works the same way; just change the endpoint to `/uninstall`.
### Linux
```shell
curl https://download.hyper.space/api/uninstall | bash
```
### Mac
```shell
curl https://download.hyper.space/api/uninstall | sh
```
### Windows
```shell
(Invoke-WebRequest "https://download.hyper.space/uninstall?platform=windows").Content | powershell -
```
## Docker
There are two pre-built Docker images: a CPU-only image that installs and serves Mistral 7B, and an image that requires an Nvidia GPU and installs and serves Llama 3.
- [`cpu-mistral-7b`](https://hub.docker.com/repository/docker/kartikhyper/aios)
- [`nvidia-llama-3`](https://hub.docker.com/repository/docker/kartikhyper/aios-nvidia)
Make sure that the environment you run the Nvidia image in has `nvidia-container-toolkit` installed and selected as the default runtime.
## Usage
```
aios-cli [OPTIONS] <COMMAND>
```
## Example
Since there are a lot of commands coming up, here is a basic example covering some common use cases:
```shell
# Start the actual daemon
aios-cli start
# See what models are available
aios-cli models available
# Install one of them locally
aios-cli models add hf:TheBloke/Mistral-7B-Instruct-v0.1-GGUF:mistral-7b-instruct-v0.1.Q4_K_S.gguf
# Run a local inference using it
aios-cli infer --model hf:TheBloke/Mistral-7B-Instruct-v0.1-GGUF:mistral-7b-instruct-v0.1.Q4_K_S.gguf --prompt "Can you explain how to write an HTTP server in Rust?"
# Import your private key from a .pem or .base58 file
aios-cli hive import-keys ./my.pem
# Set those keys as the preferred keys for this session
aios-cli hive login
# Connect to the network (now providing inference for the model you installed before)
aios-cli hive connect
# Run an inference through someone else on the network (as you can see it's the exact same format as the normal `infer` just prefixed with `hive`)
aios-cli hive infer --model hf:TheBloke/Mistral-7B-Instruct-v0.1-GGUF:mistral-7b-instruct-v0.1.Q4_K_S.gguf --prompt "Can you explain how to write an HTTP server in Rust?"
# There's a shortcut to start and login/connect to immediately start hosting local models as well
aios-cli start --connect
```
## Global Options
- `--verbose`: Increases the verbosity of the output. Useful for debugging or getting more detailed information.
- `-h, --help`: Prints the help message, showing available commands and options.
## Commands
### `start`
Starts the local aiOS daemon.
Usage: `aios-cli start`
### `status`
Checks the status of your local aiOS daemon and shows you whether it is still running.
Usage: `aios-cli status`
### `kill`
Terminates the currently running local aiOS daemon. This can be useful if you find yourself in a broken state and need a clean way to restart.
Usage: `aios-cli kill`
### `models`
Commands to manage your local models.
Usage: `aios-cli models [OPTIONS] <COMMAND>`
Subcommands:
- `list`: Lists currently downloaded models.
- `add`: Downloads a new model.
- `remove`: Removes a downloaded model.
- `check`: Checks if the given model is valid on disk.
- `migrate`: Migrates a model from V0 of aiOS to the new location. It is highly unlikely that you would need to use this now.
- `available`: Lists the models available on the network.
- `help`: Prints the help message for the models command or its subcommands.
### `system-info`
Shows the system specifications that are relevant for model inference.
Usage: `aios-cli system-info`
### `infer`
Uses local models to perform inference.
Usage: `aios-cli infer [OPTIONS]`
(Additional options and parameters for inference would be listed here)
### `hive`
Runs commands against the Hive servers. For context, "Hive" is the name for the Hyperspace-hosted servers.
Usage: `aios-cli hive [OPTIONS] <COMMAND>`
Subcommands:
- `login`: Login with your keypair credentials.
- `import-keys`: Import your keys (either ed25519 PEM file or base58 file).
- `connect`: Connect to the network and provide inference using local models.
- `whoami`: Get currently signed in keys.
- `disconnect`: Disconnect from the network.
- `infer`: Run an inference on the network.
- `listen`: Listen for all hive events.
- `select-tier`: Select a tier to start receiving points.
- `allocate`: Allocate the amount of GPU memory you would like to provide and get automatically placed in the best tier for points.
- `interrupt`: Interrupt an inference you are currently doing for the network.
- `help`: Print the help message for the hive command or its subcommands.
Options:
- `--verbose`: can be added to any command to get more information about errors.
### `version`
Prints the current version of the aiOS CLI tool.
Usage: `aios-cli version`
### `help`
Prints the help message or the help of the given subcommand(s).
Usage:
- `aios-cli help`: Prints general help
- `aios-cli help [COMMAND]`: Prints help for a specific command
## Points
To earn points you need to be placed in a tier (currently numbered 1 to 5, from best to worst).
Each tier has required models that you need to download and register to the network, and a required amount of GPU memory:
- `1` : `30GB`
- `2` : `20GB`
- `3` : `8GB`
- `4` : `4GB`
- `5` : `2GB`
You can see which models you need by attempting `hive select-tier` or by running `hive allocate` with the amount of VRAM you want to provide.
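As a quick sanity check, the memory thresholds above can be expressed as a small shell snippet. This is a standalone illustration of the tier table, not part of the CLI; the `vram` value is an example input and nothing here calls `aios-cli`:

```shell
# Best (lowest-numbered) tier for a given amount of VRAM, per the table above.
vram=10  # example input, in GB
if   [ "$vram" -ge 30 ]; then tier=1
elif [ "$vram" -ge 20 ]; then tier=2
elif [ "$vram" -ge 8 ];  then tier=3
elif [ "$vram" -ge 4 ];  then tier=4
elif [ "$vram" -ge 2 ];  then tier=5
else tier=none  # below 2GB no tier applies
fi
echo "Best tier for ${vram}GB: $tier"
```

With 10GB of VRAM, for example, this lands in tier 3 (at least 8GB but less than the 20GB tier 2 requires).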
Here's a full workflow of using the CLI to start receiving points:
```shell
# Run this to see what models are required
aios-cli hive select-tier 5
# Download a required model
aios-cli models add hf:TheBloke/phi-2-GGUF:phi-2.Q4_K_M.gguf
# Make sure it's registered
aios-cli hive connect
aios-cli hive select-tier 5
# To check your current multiplier and points
aios-cli hive points
```
## Updates
When you run `start`, the CLI constantly polls for updates, since this software is at an early stage and breaking changes at the network level could otherwise make your node obsolete. These update checks, and whether they succeeded, show up in your logs, which helps with troubleshooting if you think something has gone wrong.
To make sure you are on the latest version, or to update while the daemon is not running, just run the `version` command while connected to the internet and the CLI will automatically check for and install updates. If that is not working for some reason, you can re-run the installation steps and the script will install the latest version.
## Troubleshooting
For help with issues, please make sure to attach the most recent few log files. These can be found at:
- `linux`: `~/.cache/hyperspace/kernel-logs`
- `mac`: `~/Library/Caches/hyperspace/kernel-logs`
- `windows`: `%LOCALAPPDATA%\Hyperspace\kernel-logs`
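When filing an issue, something like the snippet below can help you grab the newest log file. This is a convenience sketch, not part of the CLI; the Linux path is shown, so substitute the Mac or Windows path from the list above as appropriate:

```shell
# Locate and print the tail of the most recent kernel log (Linux path shown).
LOG_DIR="$HOME/.cache/hyperspace/kernel-logs"
if [ -d "$LOG_DIR" ]; then
    # ls -t sorts newest first
    latest=$(ls -t "$LOG_DIR" 2>/dev/null | head -n 1)
    if [ -n "$latest" ]; then
        echo "Newest log: $LOG_DIR/$latest"
        tail -n 100 "$LOG_DIR/$latest"
    else
        echo "Log directory is empty"
    fi
else
    echo "No log directory found at $LOG_DIR"
fi
```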
## Support
Feel free to open an issue here on GitHub if you run into any problems or think the documentation can be improved.
The CLI uses the popular `clap` crate for argument parsing and the `reqwest` crate for making HTTP requests to the RPC. Below is a minimal sketch of what that structure looks like (note that it uses the clap v2 API):

```rust
use clap::{App, Arg, SubCommand};
use reqwest::Client;
use serde_json::{json, Value};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Define the CLI structure
    let matches = App::new("Infernet CLI")
        .version("1.0")
        .author("Your Name <[email protected]>")
        .about("CLI to interact with an RPC endpoint")
        .arg(
            Arg::with_name("rpc")
                .short("r")
                .long("rpc")
                .value_name("RPC_URL")
                .help("Sets the RPC URL")
                .takes_value(true)
                .required(true),
        )
        .subcommand(
            SubCommand::with_name("call")
                .about("Makes an RPC call")
                .arg(
                    Arg::with_name("method")
                        .short("m")
                        .long("method")
                        .value_name("METHOD_NAME")
                        .help("Specifies the RPC method to call")
                        .takes_value(true)
                        .required(true),
                )
                .arg(
                    Arg::with_name("params")
                        .short("p")
                        .long("params")
                        .value_name("PARAMS")
                        .help("JSON string of method parameters")
                        .takes_value(true)
                        .required(false),
                ),
        )
        .get_matches();

    // Get the RPC URL from the arguments
    let rpc_url = matches.value_of("rpc").unwrap();

    // Match the subcommand
    if let Some(matches) = matches.subcommand_matches("call") {
        let method = matches.value_of("method").unwrap();
        // Parse --params as JSON, falling back to an empty array
        let params: Value = matches
            .value_of("params")
            .map(|p| serde_json::from_str(p).unwrap_or(json!([])))
            .unwrap_or(json!([]));

        // Make the RPC call
        let result = rpc_call(rpc_url, method, params).await?;
        println!("RPC Response: {}", result);
    } else {
        println!("No subcommand provided. Use --help for more information.");
    }

    Ok(())
}

// Make a JSON-RPC 2.0 call and return the `result` field of the response
async fn rpc_call(rpc_url: &str, method: &str, params: Value) -> Result<Value, Box<dyn Error>> {
    let client = Client::new();
    let payload = json!({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": 1
    });

    let response = client
        .post(rpc_url)
        .json(&payload)
        .send()
        .await?
        .json::<Value>()
        .await?;

    if let Some(error) = response.get("error") {
        Err(format!("RPC Error: {}", error).into())
    } else {
        Ok(response.get("result").cloned().unwrap_or(json!(null)))
    }
}
```
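To build a sketch like the one above you would need roughly the following dependencies. The version numbers are illustrative; note that the code uses the clap v2 builder API (`App`, `Arg::with_name`, `SubCommand`), which is not compatible with clap v3 and later, and that `reqwest` needs its `json` feature enabled:

```toml
[dependencies]
clap = "2"
reqwest = { version = "0.11", features = ["json"] }
serde_json = "1"
tokio = { version = "1", features = ["full"] }
```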