feat(updater): refactor in-memory (#37)

* refactor: updater archive is now fully validated from memory

Signed-off-by: David Lemarier <david@lemarier.ca>

* fix CI

Signed-off-by: David Lemarier <david@lemarier.ca>

* make clippy happy

Signed-off-by: David Lemarier <david@lemarier.ca>

* update documentation and fmt

Signed-off-by: David Lemarier <david@lemarier.ca>

* cleanup and add final doc

Signed-off-by: David Lemarier <david@lemarier.ca>

* fmt

Signed-off-by: David Lemarier <david@lemarier.ca>

* make clippy happy

Signed-off-by: David Lemarier <david@lemarier.ca>

* remove unwanted clone

Signed-off-by: David Lemarier <david@lemarier.ca>

* [ci skip] cleanup

Signed-off-by: David Lemarier <david@lemarier.ca>

* run `http_updater_complete_process` on all platforms

Signed-off-by: David Lemarier <david@lemarier.ca>

* fix CI: `cargo test --all-features` on core tests

Signed-off-by: David Lemarier <david@lemarier.ca>

* fix appimage build

Signed-off-by: David Lemarier <david@lemarier.ca>

* update

Signed-off-by: David Lemarier <david@lemarier.ca>

* chore(deps) Update dependency @types/imagemin to v8 (#2635)

Co-authored-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: Lucas Nogueira <lucas@tauri.studio>
Co-authored-by: lucasfernog <lucasfernog@users.noreply.github.com>
Co-authored-by: Ngo Iok Ui (Wu Yu Wei) <wusyong9104@gmail.com>
Co-authored-by: david <david@lemarier.ca>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: chip <chip@chip.sh>
Co-authored-by: David Von Edge <david.vonedge@smiths.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Amr Bashir <48618675+amrbashir@users.noreply.github.com>
Co-authored-by: Lucas Fernandes Nogueira <lucas@tauri.studio>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Elvinas Predkelis <elvinas.predkelis@gmail.com>
Co-authored-by: edgex004 <edgex004@gmail.com>
Co-authored-by: Barry Simons <linuxuser586@gmail.com>
Co-authored-by: Kris Scott <kscott91@gmail.com>
Co-authored-by: grey4owl <66082492+grey4owl@users.noreply.github.com>
Co-authored-by: cybai <cyb.ai.815@gmail.com>
Co-authored-by: Lucas Nogueira <lucasfernandesnog@gmail.com>
Co-authored-by: Robert Buchanan <robbie.buchanan@ioneyed.com>
Co-authored-by: Kasper <kasperkh.kh@gmail.com>
Co-authored-by: Manuel Quarneti <manuelquarneti@gmail.com>
Co-authored-by: Stef Kors <stef.kors@gmail.com>
Co-authored-by: David D <1168397+davedbase@users.noreply.github.com>
Co-authored-by: Adilson Schmitt Junior <adilsonschj@gmail.com>
Co-authored-by: Bill Avery <wravery@users.noreply.github.com>
Co-authored-by: Julien Kauffmann <90217528+jkauffmann-legion@users.noreply.github.com>
Co-authored-by: Andrea Giammarchi <andrea.giammarchi@gmail.com>
Co-authored-by: ThisSeanZhang <46880100+ThisSeanZhang@users.noreply.github.com>
Co-authored-by: Jonas Kruckenberg <iterpre@protonmail.com>

* Revert "chore(deps) Update dependency @types/imagemin to v8 (#2635)"

This reverts commit c0285e873d.

* [ci skip] fix errors

Signed-off-by: David Lemarier <david@lemarier.ca>

* [ci skip] fix build errors

Signed-off-by: David Lemarier <david@lemarier.ca>

* [ci skip] patch `SafePathBuf` tests

Signed-off-by: David Lemarier <david@lemarier.ca>

* allow minisign legacy

Signed-off-by: David Lemarier <david@lemarier.ca>

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: Lucas Nogueira <lucas@tauri.studio>
Co-authored-by: lucasfernog <lucasfernog@users.noreply.github.com>
Co-authored-by: Ngo Iok Ui (Wu Yu Wei) <wusyong9104@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: chip <chip@chip.sh>
Co-authored-by: David Von Edge <david.vonedge@smiths.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Amr Bashir <48618675+amrbashir@users.noreply.github.com>
Co-authored-by: Elvinas Predkelis <elvinas.predkelis@gmail.com>
Co-authored-by: edgex004 <edgex004@gmail.com>
Co-authored-by: Barry Simons <linuxuser586@gmail.com>
Co-authored-by: Kris Scott <kscott91@gmail.com>
Co-authored-by: grey4owl <66082492+grey4owl@users.noreply.github.com>
Co-authored-by: cybai <cyb.ai.815@gmail.com>
Co-authored-by: Lucas Nogueira <lucasfernandesnog@gmail.com>
Co-authored-by: Robert Buchanan <robbie.buchanan@ioneyed.com>
Co-authored-by: Kasper <kasperkh.kh@gmail.com>
Co-authored-by: Manuel Quarneti <manuelquarneti@gmail.com>
Co-authored-by: Stef Kors <stef.kors@gmail.com>
Co-authored-by: David D <1168397+davedbase@users.noreply.github.com>
Co-authored-by: Adilson Schmitt Junior <adilsonschj@gmail.com>
Co-authored-by: Bill Avery <wravery@users.noreply.github.com>
Co-authored-by: Julien Kauffmann <90217528+jkauffmann-legion@users.noreply.github.com>
Co-authored-by: Andrea Giammarchi <andrea.giammarchi@gmail.com>
Co-authored-by: ThisSeanZhang <46880100+ThisSeanZhang@users.noreply.github.com>
Co-authored-by: Jonas Kruckenberg <iterpre@protonmail.com>
This commit is contained in:
david authored on 2021-12-16 06:45:27 -05:00; committed by Lucas Nogueira
parent 24fa21c9b7
commit be096623bf
11 changed files with 352 additions and 280 deletions

View File

@@ -3,8 +3,11 @@
// SPDX-License-Identifier: MIT
use either::{self, Either};
use std::{fs, io, path};
use std::{
fs,
io::{self, Read, Seek},
path::{self, Path, PathBuf},
};
/// The supported archive formats.
#[derive(Debug, Clone, Copy, PartialEq)]
@@ -28,49 +31,65 @@ pub enum Compression {
/// The extract manager to retrieve files from archives.
#[derive(Debug)]
pub struct Extract<'a> {
source: &'a path::Path,
archive_format: Option<ArchiveFormat>,
pub struct Extract<R> {
reader: R,
archive_format: ArchiveFormat,
}
fn detect_archive_type(path: &path::Path) -> ArchiveFormat {
match path.extension() {
Some(extension) if extension == std::ffi::OsStr::new("zip") => ArchiveFormat::Zip,
Some(extension) if extension == std::ffi::OsStr::new("tar") => ArchiveFormat::Tar(None),
Some(extension) if extension == std::ffi::OsStr::new("gz") => match path
.file_stem()
.map(path::Path::new)
.and_then(|f| f.extension())
{
Some(extension) if extension == std::ffi::OsStr::new("tar") => {
ArchiveFormat::Tar(Some(Compression::Gz))
}
_ => ArchiveFormat::Plain(Some(Compression::Gz)),
},
_ => ArchiveFormat::Plain(None),
}
}
impl<'a> Extract<'a> {
/// Create an `Extractor` from a source path
pub fn from_source(source: &'a path::Path) -> Extract<'a> {
Self {
source,
archive_format: None,
impl<R: Read + Seek> Extract<R> {
/// Create archive from reader.
pub fn from_cursor(mut reader: R, archive_format: ArchiveFormat) -> Extract<R> {
if reader.seek(io::SeekFrom::Start(0)).is_err() {
eprintln!("Could not seek to start of the file");
}
Extract {
reader,
archive_format,
}
}
/// Specify an archive format of the source being extracted. If not specified, the
/// archive format will be determined from the file extension.
pub fn archive_format(&mut self, format: ArchiveFormat) -> &mut Self {
self.archive_format = Some(format);
self
/// Get the archive content.
pub fn files(&mut self) -> crate::api::Result<Vec<PathBuf>> {
let reader = &mut self.reader;
let mut all_files = Vec::new();
if reader.seek(io::SeekFrom::Start(0)).is_err() {
eprintln!("Could not seek to start of the file");
}
match self.archive_format {
ArchiveFormat::Plain(compression) | ArchiveFormat::Tar(compression) => {
let reader = Self::get_archive_reader(reader, compression);
match self.archive_format {
ArchiveFormat::Tar(_) => {
let mut archive = tar::Archive::new(reader);
for entry in archive.entries()?.flatten() {
if let Ok(path) = entry.path() {
all_files.push(path.to_path_buf());
}
}
}
_ => unreachable!(),
};
}
ArchiveFormat::Zip => {
let archive = zip::ZipArchive::new(reader)?;
for entry in archive.file_names() {
all_files.push(PathBuf::from(entry));
}
}
}
Ok(all_files)
}
// Get the reader based on the compression type.
fn get_archive_reader(
source: fs::File,
source: &mut R,
compression: Option<Compression>,
) -> Either<fs::File, flate2::read::GzDecoder<fs::File>> {
) -> Either<&mut R, flate2::read::GzDecoder<&mut R>> {
if source.seek(io::SeekFrom::Start(0)).is_err() {
eprintln!("Could not seek to start of the file");
}
match compression {
Some(Compression::Gz) => Either::Right(flate2::read::GzDecoder::new(source)),
None => Either::Left(source),
@@ -80,17 +99,15 @@ impl<'a> Extract<'a> {
/// Extract an entire source archive into a specified path. If the source is a single compressed
/// file and not an archive, it will be extracted into a file with the same name inside of
/// `into_dir`.
pub fn extract_into(&self, into_dir: &path::Path) -> crate::api::Result<()> {
let source = fs::File::open(self.source)?;
let archive = self
.archive_format
.unwrap_or_else(|| detect_archive_type(self.source));
match archive {
pub fn extract_into(&mut self, into_dir: &path::Path) -> crate::api::Result<()> {
let reader = &mut self.reader;
if reader.seek(io::SeekFrom::Start(0)).is_err() {
eprintln!("Could not seek to start of the file");
}
match self.archive_format {
ArchiveFormat::Plain(compression) | ArchiveFormat::Tar(compression) => {
let mut reader = Self::get_archive_reader(source, compression);
match archive {
let mut reader = Self::get_archive_reader(reader, compression);
match self.archive_format {
ArchiveFormat::Plain(_) => {
match fs::create_dir_all(into_dir) {
Ok(_) => (),
@@ -100,12 +117,8 @@ impl<'a> Extract<'a> {
}
}
}
let file_name = self.source.file_name().ok_or_else(|| {
crate::api::Error::Extract("Extractor source has no file-name".into())
})?;
let mut out_path = into_dir.join(file_name);
out_path.set_extension("");
let mut out_file = fs::File::create(&out_path)?;
let mut out_file = fs::File::create(&into_dir)?;
io::copy(&mut reader, &mut out_file)?;
}
ArchiveFormat::Tar(_) => {
@@ -115,8 +128,9 @@ impl<'a> Extract<'a> {
_ => unreachable!(),
};
}
ArchiveFormat::Zip => {
let mut archive = zip::ZipArchive::new(source)?;
let mut archive = zip::ZipArchive::new(reader)?;
for i in 0..archive.len() {
let mut file = archive.by_index(i)?;
// Decode the file name from raw bytes instead of using file.name() directly.
@@ -144,31 +158,27 @@ impl<'a> Extract<'a> {
}
}
}
};
}
Ok(())
}
/// Extract a single file from a source and save to a file of the same name in `into_dir`.
/// If the source is a single compressed file, it will be saved with the name `file_to_extract`
/// in the specified `into_dir`.
/// Extract a single file from a source and extract it into `into_path`.
/// If it's a directory, the target will be created; if it's a file, it will be extracted to this location.
/// Note: you need to include the complete path, with file name and extension.
pub fn extract_file<T: AsRef<path::Path>>(
&self,
into_dir: &path::Path,
&mut self,
into_path: &path::Path,
file_to_extract: T,
) -> crate::api::Result<()> {
let file_to_extract = file_to_extract.as_ref();
let source = fs::File::open(self.source)?;
let archive = self
.archive_format
.unwrap_or_else(|| detect_archive_type(self.source));
let reader = &mut self.reader;
match archive {
match self.archive_format {
ArchiveFormat::Plain(compression) | ArchiveFormat::Tar(compression) => {
let mut reader = Self::get_archive_reader(source, compression);
match archive {
let mut reader = Self::get_archive_reader(reader, compression);
match self.archive_format {
ArchiveFormat::Plain(_) => {
match fs::create_dir_all(into_dir) {
match fs::create_dir_all(into_path) {
Ok(_) => (),
Err(e) => {
if e.kind() != io::ErrorKind::AlreadyExists {
@@ -176,11 +186,7 @@ impl<'a> Extract<'a> {
}
}
}
let file_name = file_to_extract.file_name().ok_or_else(|| {
crate::api::Error::Extract("Extractor source has no file-name".into())
})?;
let out_path = into_dir.join(file_name);
let mut out_file = fs::File::create(&out_path)?;
let mut out_file = fs::File::create(into_path)?;
io::copy(&mut reader, &mut out_file)?;
}
ArchiveFormat::Tar(_) => {
@@ -195,7 +201,27 @@ impl<'a> Extract<'a> {
file_to_extract
))
})?;
entry.unpack_in(into_dir)?;
// determine if it's a file or a directory
if entry.header().entry_type() == tar::EntryType::Directory {
// this is a directory, lets create it
match fs::create_dir_all(into_path) {
Ok(_) => (),
Err(e) => {
if e.kind() != io::ErrorKind::AlreadyExists {
return Err(e.into());
}
}
}
} else {
let mut out_file = fs::File::create(into_path)?;
io::copy(&mut entry, &mut out_file)?;
// make sure we set permissions
if let Ok(mode) = entry.header().mode() {
set_perms(into_path, Some(&mut out_file), mode, true)?;
}
}
}
_ => {
panic!("Unreasonable code");
@@ -203,16 +229,87 @@ impl<'a> Extract<'a> {
};
}
ArchiveFormat::Zip => {
let mut archive = zip::ZipArchive::new(source)?;
let mut archive = zip::ZipArchive::new(reader)?;
let mut file = archive.by_name(
file_to_extract
.to_str()
.expect("Could not convert file to str"),
)?;
let mut output = fs::File::create(into_dir.join(file.name()))?;
io::copy(&mut file, &mut output)?;
if file.is_dir() {
// this is a directory, lets create it
match fs::create_dir_all(into_path) {
Ok(_) => (),
Err(e) => {
if e.kind() != io::ErrorKind::AlreadyExists {
return Err(e.into());
}
}
}
} else {
let mut out_file = fs::File::create(into_path)?;
io::copy(&mut file, &mut out_file)?;
}
}
};
}
Ok(())
}
}
fn set_perms(
dst: &Path,
f: Option<&mut std::fs::File>,
mode: u32,
preserve: bool,
) -> crate::api::Result<()> {
_set_perms(dst, f, mode, preserve).map_err(|_| {
crate::api::Error::Extract(format!(
"failed to set permissions to {:o} \
for `{}`",
mode,
dst.display()
))
})
}
#[cfg(unix)]
fn _set_perms(
dst: &Path,
f: Option<&mut std::fs::File>,
mode: u32,
preserve: bool,
) -> io::Result<()> {
use std::os::unix::prelude::*;
let mode = if preserve { mode } else { mode & 0o777 };
let perm = fs::Permissions::from_mode(mode as _);
match f {
Some(f) => f.set_permissions(perm),
None => fs::set_permissions(dst, perm),
}
}
#[cfg(windows)]
fn _set_perms(
dst: &Path,
f: Option<&mut std::fs::File>,
mode: u32,
_preserve: bool,
) -> io::Result<()> {
if mode & 0o200 == 0o200 {
return Ok(());
}
match f {
Some(f) => {
let mut perm = f.metadata()?.permissions();
perm.set_readonly(true);
f.set_permissions(perm)
}
None => {
let mut perm = fs::metadata(dst)?.permissions();
perm.set_readonly(true);
fs::set_permissions(dst, perm)
}
}
}
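The core idea of this hunk is that `Extract` is now generic over `R: Read + Seek` instead of holding a filesystem path, which is why every pass (`files`, `extract_into`, `extract_file`) rewinds the reader first. A minimal std-only sketch of that pattern (the `read_all` helper is hypothetical, not part of the tauri API):

```rust
use std::io::{Cursor, Read, Seek, SeekFrom};

// An in-memory buffer (`Cursor<Vec<u8>>`) implements both `Read` and `Seek`,
// so the same bytes can be scanned more than once (e.g. list entries, then
// extract) without ever touching the filesystem.
fn read_all<R: Read + Seek>(reader: &mut R) -> std::io::Result<Vec<u8>> {
    // Rewind first, mirroring the `seek(SeekFrom::Start(0))` calls in the
    // diff; a prior pass may have left the cursor at EOF.
    reader.seek(SeekFrom::Start(0))?;
    let mut data = Vec::new();
    reader.read_to_end(&mut data)?;
    Ok(data)
}

fn main() -> std::io::Result<()> {
    let mut buffer = Cursor::new(b"archive bytes".to_vec());
    // First pass (e.g. signature verification) consumes the reader...
    let first = read_all(&mut buffer)?;
    // ...but a second pass (e.g. extraction) still sees the full content,
    // because `read_all` rewinds before reading.
    let second = read_all(&mut buffer)?;
    assert_eq!(first, second);
    println!("{}", first.len()); // → prints 13
    Ok(())
}
```

This is also why `get_archive_reader` takes `&mut R` rather than an owned `fs::File`: the caller keeps ownership of the buffer and can rewind it between passes.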

View File

@@ -23,6 +23,7 @@ use std::{
sync::Arc,
};
#[derive(Clone, Debug)]
pub struct SafePathBuf(std::path::PathBuf);
impl AsRef<Path> for SafePathBuf {
@@ -332,9 +333,7 @@ fn resolve_path<R: Runtime>(
#[cfg(test)]
mod tests {
use std::path::SafePathBuf;
use super::{BaseDirectory, DirOperationOptions, FileOperationOptions};
use super::{BaseDirectory, DirOperationOptions, FileOperationOptions, SafePathBuf};
use quickcheck::{Arbitrary, Gen};
impl Arbitrary for BaseDirectory {
@@ -364,6 +363,12 @@ mod tests {
}
}
impl Arbitrary for SafePathBuf {
fn arbitrary(g: &mut Gen) -> Self {
SafePathBuf(std::path::PathBuf::arbitrary(g))
}
}
#[tauri_macros::module_command_test(fs_read_file, "fs > readFile")]
#[quickcheck_macros::quickcheck]
fn read_file(path: SafePathBuf, options: Option<FileOperationOptions>) {
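The `SafePathBuf` changes above follow the common newtype pattern: wrap `PathBuf`, validate at construction, and expose the inner path read-only via `AsRef<Path>`. A std-only sketch under that assumption (the `new` constructor and its `..` check are illustrative, not the exact tauri validation logic):

```rust
use std::path::{Component, Path, PathBuf};

// Minimal newtype wrapper: commands receive a `SafePathBuf` that was
// validated once, instead of a raw `PathBuf`.
#[derive(Clone, Debug, PartialEq)]
pub struct SafePathBuf(PathBuf);

impl SafePathBuf {
    // Hypothetical constructor: reject parent-directory components, one of
    // the checks such a wrapper typically performs.
    pub fn new(path: PathBuf) -> Result<Self, &'static str> {
        if path.components().any(|c| matches!(c, Component::ParentDir)) {
            Err("path must not contain `..`")
        } else {
            Ok(Self(path))
        }
    }
}

impl AsRef<Path> for SafePathBuf {
    fn as_ref(&self) -> &Path {
        &self.0
    }
}

fn main() {
    assert!(SafePathBuf::new(PathBuf::from("docs/readme.md")).is_ok());
    assert!(SafePathBuf::new(PathBuf::from("../etc/passwd")).is_err());
    println!("ok");
}
```

The `Arbitrary` impl in the diff then only has to generate random `PathBuf`s and wrap them, which is what the quickcheck-based `read_file` test relies on.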

View File

@@ -3,38 +3,37 @@
// SPDX-License-Identifier: MIT
use super::error::{Error, Result};
use crate::{
api::{file::Extract, version},
Env,
use crate::api::{
file::{ArchiveFormat, Extract, Move},
http::{ClientBuilder, HttpRequestBuilder},
version,
};
use base64::decode;
use http::StatusCode;
use minisign_verify::{PublicKey, Signature};
use tauri_utils::platform::current_exe;
use tauri_utils::Env;
use std::{
collections::HashMap,
env,
ffi::OsStr,
fs::{read_dir, remove_file, File, OpenOptions},
io::{prelude::*, BufReader, Read},
io::{Cursor, Read, Seek},
path::{Path, PathBuf},
str::from_utf8,
time::{SystemTime, UNIX_EPOCH},
};
#[cfg(target_os = "macos")]
use std::fs::rename;
#[cfg(not(target_os = "macos"))]
use std::process::Command;
use std::ffi::OsStr;
#[cfg(target_os = "macos")]
use crate::api::file::Move;
use crate::api::http::{ClientBuilder, HttpRequestBuilder};
#[cfg(not(target_os = "windows"))]
use crate::api::file::Compression;
#[cfg(target_os = "windows")]
use std::process::exit;
use std::{
fs::read_dir,
process::{exit, Command},
};
#[cfg(target_os = "windows")]
use tauri_utils::platform::current_exe;
#[derive(Debug)]
pub struct RemoteRelease {
@@ -252,21 +251,14 @@ impl<'a> UpdateBuilder<'a> {
let current_version = self.current_version;
// If no executable path provided, we use current_exe from rust
let executable_path = if let Some(v) = &self.executable_path {
v.clone()
} else {
// we expect it to fail if we can't find the executable path
// without this path we can't continue the update process.
current_exe()?
};
let executable_path = self.executable_path.unwrap_or(env::current_exe()?);
// Is the target provided by the config?
// It should be: linux, darwin, win32 or win64
let target = if let Some(t) = &self.target {
t.clone()
} else {
get_updater_target().ok_or(Error::UnsupportedPlatform)?
};
let target = self
.target
.or_else(get_updater_target)
.ok_or(Error::UnsupportedPlatform)?;
// Get the extract_path from the provided executable_path
let extract_path = extract_path_from_executable(&self.env, &executable_path);
@@ -319,7 +311,10 @@ impl<'a> UpdateBuilder<'a> {
if let Ok(res) = resp {
let res = res.read().await?;
// got status code 2XX
if StatusCode::from_u16(res.status).unwrap().is_success() {
if StatusCode::from_u16(res.status)
.map_err(|e| Error::Builder(e.to_string()))?
.is_success()
{
// if we got 204
if StatusCode::NO_CONTENT.as_u16() == res.status {
// return with `UpToDate` error
@@ -413,9 +408,9 @@ impl Update {
// @todo(lemarier): Split into download and install (two step) but need to be thread safe
pub async fn download_and_install(&self, pub_key: Option<String>) -> Result {
// download url for selected release
let url = self.download_url.clone();
let url = self.download_url.as_str();
// extract path
let extract_path = self.extract_path.clone();
let extract_path = &self.extract_path;
// make sure we can install the update on linux
// We fail here because later we can add more linux support
@@ -427,31 +422,6 @@ impl Update {
return Err(Error::UnsupportedPlatform);
}
// used for temp file name
// if we cant extract app name, we use unix epoch duration
let current_time = SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("Unable to get Unix Epoch")
.subsec_nanos()
.to_string();
// get the current app name
let bin_name = current_exe()
.ok()
.and_then(|pb| pb.file_name().map(|s| s.to_os_string()))
.and_then(|s| s.into_string().ok())
.unwrap_or_else(|| current_time.clone());
// tmp dir for extraction
let tmp_dir = tempfile::Builder::new()
.prefix(&format!("{}_{}_download", bin_name, current_time))
.tempdir()?;
// tmp directories are used to create backup of current application
// if something goes wrong, we can restore to previous state
let tmp_archive_path = tmp_dir.path().join(detect_archive_in_url(&url));
let mut tmp_archive = File::create(&tmp_archive_path)?;
// set our headers
let mut headers = HashMap::new();
headers.insert("Accept".into(), "application/octet-stream".into());
@@ -461,7 +431,7 @@ impl Update {
let resp = ClientBuilder::new()
.build()?
.send(
HttpRequestBuilder::new("GET", &url)?
HttpRequestBuilder::new("GET", url)?
.headers(headers)
// wait 20sec for the firewall
.timeout(20),
@@ -471,39 +441,40 @@ impl Update {
.await?;
// make sure it's success
if !StatusCode::from_u16(resp.status).unwrap().is_success() {
if !StatusCode::from_u16(resp.status)
.map_err(|e| Error::Network(e.to_string()))?
.is_success()
{
return Err(Error::Network(format!(
"Download request failed with status: {}",
resp.status
)));
}
tmp_archive.write_all(&resp.data)?;
// create memory buffer from our archive (Seek + Read)
let mut archive_buffer = Cursor::new(resp.data);
// Validate signature ONLY if pubkey is available in tauri.conf.json
if let Some(pub_key) = pub_key {
// We need an announced signature by the server
// if there is no signature, bail out.
if let Some(signature) = self.signature.clone() {
if let Some(signature) = &self.signature {
// we make sure the archive is valid and signed with the private key linked with the publickey
verify_signature(&tmp_archive_path, signature, &pub_key)?;
verify_signature(&mut archive_buffer, signature, &pub_key)?;
} else {
// We have a public key inside our source file, but not announced by the server,
// we assume this update is NOT valid.
return Err(Error::PubkeyButNoSignature);
}
}
// extract using tauri api inside a tmp path
Extract::from_source(&tmp_archive_path).extract_into(tmp_dir.path())?;
// Remove archive (not needed anymore)
remove_file(&tmp_archive_path)?;
// we copy the files depending of the operating system
// we run the setup, appimage re-install or overwrite the
// macos .app
#[cfg(target_os = "windows")]
copy_files_and_run(tmp_dir, extract_path, self.with_elevated_task)?;
copy_files_and_run(archive_buffer, extract_path, self.with_elevated_task)?;
#[cfg(not(target_os = "windows"))]
copy_files_and_run(tmp_dir, extract_path)?;
copy_files_and_run(archive_buffer, extract_path)?;
// We are done!
Ok(())
}
@@ -520,25 +491,28 @@ impl Update {
// the extract_path is the current AppImage path
// tmp_dir is where our new AppImage is found
#[cfg(target_os = "linux")]
fn copy_files_and_run(tmp_dir: tempfile::TempDir, extract_path: PathBuf) -> Result {
// we delete our current AppImage (we'll create a new one later)
remove_file(&extract_path)?;
fn copy_files_and_run<R: Read + Seek>(archive_buffer: R, extract_path: &Path) -> Result {
let tmp_dir = tempfile::Builder::new()
.prefix("tauri_current_app")
.tempdir()?;
// In our tempdir we expect 1 directory (should be the <app>.app)
let paths = read_dir(&tmp_dir)?;
let tmp_app_image = &tmp_dir.path().join("current_app.AppImage");
for path in paths {
let found_path = path?.path();
// make sure it's our .AppImage
if found_path.extension() == Some(OsStr::new("AppImage")) {
// Simply overwrite our AppImage (we use the command)
// because it prevents failures in the byte stream
Command::new("mv")
.arg("-f")
.arg(&found_path)
.arg(&extract_path)
.status()?;
// create a backup of our current app image
Move::from_source(extract_path).to_dest(tmp_app_image)?;
// extract the buffer to the tmp_dir
// we extract our signed archive into our final directory without any temp file
let mut extractor =
Extract::from_cursor(archive_buffer, ArchiveFormat::Tar(Some(Compression::Gz)));
for file in extractor.files()? {
if file.extension() == Some(OsStr::new("AppImage")) {
// if something went wrong during the extraction, we should restore previous app
if let Err(err) = extractor.extract_file(extract_path, &file) {
Move::from_source(tmp_app_image).to_dest(extract_path)?;
return Err(Error::Extract(err.to_string()));
}
// early finish we have everything we need here
return Ok(());
}
@@ -563,17 +537,31 @@ fn copy_files_and_run(tmp_dir: tempfile::TempDir, extract_path: PathBuf) -> Resu
// Update server can provide a custom EXE (installer) who can run any task.
#[cfg(target_os = "windows")]
#[allow(clippy::unnecessary_wraps)]
fn copy_files_and_run(
tmp_dir: tempfile::TempDir,
_extract_path: PathBuf,
fn copy_files_and_run<R: Read + Seek>(
archive_buffer: R,
_extract_path: &Path,
with_elevated_task: bool,
) -> Result {
use crate::api::file::Move;
// FIXME: We need to create a memory buffer with the MSI and then run it.
// (instead of extracting the MSI to a temp path)
//
// The tricky part is the MSI need to be exposed and spawned so the memory allocation
// shouldn't drop but we should be able to pass the reference so we can drop it once the installation
// is done, otherwise we have a huge memory leak.
let tmp_dir = tempfile::Builder::new().tempdir()?.into_path();
// extract the buffer to the tmp_dir
// we extract our signed archive into our final directory without any temp file
let mut extractor = Extract::from_cursor(archive_buffer, ArchiveFormat::Zip);
// extract the msi
extractor.extract_into(&tmp_dir);
let paths = read_dir(&tmp_dir)?;
// This consumes the TempDir without deleting directory on the filesystem,
// meaning that the directory will no longer be automatically deleted.
let tmp_path = tmp_dir.into_path();
for path in paths {
let found_path = path?.path();
// we support 2 type of files exe & msi for now
@@ -604,7 +592,7 @@ fn copy_files_and_run(
{
if status.success() {
// Rename the MSI to the match file name the Skip UAC task is expecting it to be
let temp_msi = tmp_path.with_file_name(bin_name).with_extension("msi");
let temp_msi = tmp_dir.with_file_name(bin_name).with_extension("msi");
Move::from_source(&found_path)
.to_dest(&temp_msi)
.expect("Unable to move update MSI");
@@ -641,17 +629,6 @@ fn copy_files_and_run(
Ok(())
}
// Get the current app name in the path
// Example; `/Applications/updater-example.app/Contents/MacOS/updater-example`
// Should return; `updater-example.app`
#[cfg(target_os = "macos")]
fn macos_app_name_in_path(extract_path: &Path) -> String {
let components = extract_path.components();
let app_name = components.last().unwrap();
let app_name = app_name.as_os_str().to_str().unwrap();
app_name.to_string()
}
// MacOS
// ### Expected structure:
// ├── [AppName]_[version]_x64.app.tar.gz # GZ generated by tauri-bundler
@@ -660,41 +637,44 @@ fn macos_app_name_in_path(extract_path: &Path) -> String {
// │ └── ...
// └── ...
#[cfg(target_os = "macos")]
fn copy_files_and_run(tmp_dir: tempfile::TempDir, extract_path: PathBuf) -> Result {
// In our tempdir we expect 1 directory (should be the <app>.app)
let paths = read_dir(&tmp_dir)?;
fn copy_files_and_run<R: Read + Seek>(archive_buffer: R, extract_path: &Path) -> Result {
let mut extracted_files: Vec<PathBuf> = Vec::new();
// current app name in /Applications/<app>.app
let app_name = macos_app_name_in_path(&extract_path);
// extract the buffer to the tmp_dir
// we extract our signed archive into our final directory without any temp file
let mut extractor =
Extract::from_cursor(archive_buffer, ArchiveFormat::Tar(Some(Compression::Gz)));
// the first file in the tar.gz will always be
// <app_name>/Contents
let all_files = extractor.files()?;
let tmp_dir = tempfile::Builder::new()
.prefix("tauri_current_app")
.tempdir()?;
for path in paths {
let mut found_path = path?.path();
// make sure it's our .app
if found_path.extension() == Some(OsStr::new("app")) {
let found_app_name = macos_app_name_in_path(&found_path);
// make sure the app name in the archive matches the installed app name in the path
if found_app_name != app_name {
// we need to replace the app name in the updater archive to match
// installed app name
let new_path = found_path.parent().unwrap().join(app_name);
rename(&found_path, &new_path)?;
// create backup of our current app
Move::from_source(extract_path).to_dest(&tmp_dir.path())?;
found_path = new_path;
// extract all the files
for file in all_files {
// skip the first folder (should be the app name)
let collected_path: PathBuf = file.iter().skip(1).collect();
let extraction_path = extract_path.join(collected_path);
// if something went wrong during the extraction, we should restore previous app
if let Err(err) = extractor.extract_file(&extraction_path, &file) {
for file in extracted_files {
// delete all the files we extracted
if file.is_dir() {
std::fs::remove_dir(file)?;
} else {
std::fs::remove_file(file)?;
}
}
let sandbox_app_path = tempfile::Builder::new()
.prefix("tauri_current_app_sandbox")
.tempdir()?;
// Replace the whole application to make sure the
// code signature is following
Move::from_source(&found_path)
.replace_using_temp(sandbox_app_path.path())
.to_dest(&extract_path)?;
// early finish we have everything we need here
return Ok(());
Move::from_source(&tmp_dir.path()).to_dest(extract_path)?;
return Err(Error::Extract(err.to_string()));
}
extracted_files.push(extraction_path);
}
Ok(())
@@ -761,30 +741,6 @@ pub fn extract_path_from_executable(env: &Env, executable_path: &Path) -> PathBu
extract_path
}
// Return the archive type to save on disk
fn detect_archive_in_url(path: &str) -> String {
path
.split('/')
.next_back()
.unwrap_or(&default_archive_name_by_os())
.to_string()
}
// Fallback archive name by os
// The main objective is to provide the right extension based on the target
// if we cant extract the archive type in the url we'll fallback to this value
fn default_archive_name_by_os() -> String {
#[cfg(target_os = "windows")]
{
"update.zip".into()
}
#[cfg(not(target_os = "windows"))]
{
"update.tar.gz".into()
}
}
// Convert base64 to string and prevent failing
fn base64_to_string(base64_string: &str) -> Result<String> {
let decoded_string = &decode(base64_string)?;
@@ -795,29 +751,28 @@ fn base64_to_string(base64_string: &str) -> Result<String> {
// Validate signature
// need to be public because its been used
// by our tests in the bundler
pub fn verify_signature(
archive_path: &Path,
release_signature: String,
//
// NOTE: The buffer position is not reset.
pub fn verify_signature<R>(
archive_reader: &mut R,
release_signature: &str,
pub_key: &str,
) -> Result<bool> {
) -> Result<bool>
where
R: Read,
{
// we need to convert the pub key
let pub_key_decoded = &base64_to_string(pub_key)?;
let public_key = PublicKey::decode(pub_key_decoded)?;
let signature_base64_decoded = base64_to_string(&release_signature)?;
let pub_key_decoded = base64_to_string(pub_key)?;
let public_key = PublicKey::decode(&pub_key_decoded)?;
let signature_base64_decoded = base64_to_string(release_signature)?;
let signature = Signature::decode(&signature_base64_decoded)?;
// We need to read the data to make sure it's not corrupted
let file_open = OpenOptions::new().read(true).open(&archive_path)?;
let mut file_buff: BufReader<File> = BufReader::new(file_open);
// read all bytes since EOF in the buffer
let mut data = vec![];
file_buff.read_to_end(&mut data)?;
// read all bytes until EOF in the buffer
let mut data = Vec::new();
archive_reader.read_to_end(&mut data)?;
// Validate signature or bail out
public_key.verify(&data, &signature, false)?;
public_key.verify(&data, &signature, true)?;
Ok(true)
}
@@ -825,11 +780,7 @@ pub fn verify_signature(
mod test {
use super::*;
#[cfg(target_os = "macos")]
use std::env::current_exe;
#[cfg(target_os = "macos")]
use std::fs::File;
#[cfg(target_os = "macos")]
use std::path::Path;
use std::{env, fs::File};
macro_rules! block {
($e:expr) => {
@@ -908,18 +859,6 @@ mod test {
}"#.into()
}
#[cfg(target_os = "macos")]
#[test]
fn test_app_name_in_path() {
let executable = extract_path_from_executable(
&crate::Env::default(),
Path::new("/Applications/updater-example.app/Contents/MacOS/updater-example"),
);
let app_name = macos_app_name_in_path(&executable);
assert!(executable.ends_with("updater-example.app"));
assert_eq!(app_name, "updater-example.app".to_string());
}
#[test]
fn simple_http_updater() {
let _m = mockito::mock("GET", "/")
@@ -1139,13 +1078,23 @@ mod test {
// run complete process on mac only for now as we don't have
// server (api) that we can use to test
#[cfg(target_os = "macos")]
#[test]
#[cfg(target_os = "macos")]
fn http_updater_complete_process() {
let good_archive_url = format!("{}/archive.tar.gz", mockito::server_url());
#[cfg(target_os = "macos")]
let archive_file = "archive.macos.tar.gz";
#[cfg(target_os = "linux")]
let archive_file = "archive.linux.tar.gz";
#[cfg(target_os = "windows")]
let archive_file = "archive.windows.zip";
let mut signature_file = File::open("./test/updater/fixture/archives/archive.tar.gz.sig")
.expect("Unable to open signature");
let good_archive_url = format!("{}/{}", mockito::server_url(), archive_file);
let mut signature_file = File::open(format!(
"./test/updater/fixture/archives/{}.sig",
archive_file
))
.expect("Unable to open signature");
let mut signature = String::new();
signature_file
.read_to_string(&mut signature)
@@ -1159,10 +1108,10 @@ mod test {
.expect("Unable to read signature as string");
// add sample file
let _m = mockito::mock("GET", "/archive.tar.gz")
let _m = mockito::mock("GET", format!("/{}", archive_file).as_str())
.with_status(200)
.with_header("content-type", "application/octet-stream")
.with_body_from_file("./test/updater/fixture/archives/archive.tar.gz")
.with_body_from_file(format!("./test/updater/fixture/archives/{}", archive_file))
.create();
// sample mock for update file
@@ -1179,7 +1128,7 @@ mod test {
// Build a tmpdir so we can test our extraction inside
// We dont want to overwrite our current executable or the directory
// Otherwise tests are failing...
let executable_path = current_exe().expect("Can't extract executable path");
let executable_path = env::current_exe().expect("Can't extract executable path");
let parent_path = executable_path
.parent()
.expect("Can't find the parent path");
@@ -1192,16 +1141,28 @@ mod test {
let tmp_dir_unwrap = tmp_dir.expect("Can't find tmp_dir");
let tmp_dir_path = tmp_dir_unwrap.path();
#[cfg(target_os = "linux")]
let my_executable = &tmp_dir_path.join("updater-example_0.1.0_amd64.AppImage");
#[cfg(target_os = "macos")]
let my_executable = &tmp_dir_path.join("my_app");
#[cfg(target_os = "windows")]
let my_executable = &tmp_dir_path.join("my_app.exe");
// configure the updater
let check_update = block!(builder(Default::default())
.url(mockito::server_url())
// It should represent the executable path, that's why we add my_app.exe in our
// test path -- in production you shouldn't have to provide it
.executable_path(&tmp_dir_path.join("my_app.exe"))
.executable_path(my_executable)
// make sure we force an update
.current_version("1.0.0")
.build());
#[cfg(target_os = "linux")]
{
env::set_var("APPIMAGE", my_executable);
}
// make sure the process worked
assert!(check_update.is_ok());
@@ -1219,8 +1180,14 @@ mod test {
// make sure the extraction went well (it should have skipped the main app.app folder)
// as we can't extract in /Applications directly
#[cfg(target_os = "macos")]
let bin_file = tmp_dir_path.join("Contents").join("MacOS").join("app");
let bin_file_exist = Path::new(&bin_file).exists();
assert!(bin_file_exist);
#[cfg(target_os = "linux")]
// linux should extract at same place as the executable path
let bin_file = my_executable;
#[cfg(target_os = "windows")]
let bin_file = tmp_dir_path.join("with").join("long").join("path.json");
assert!(bin_file.exists());
}
}
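The "NOTE: The buffer position is not reset" caveat on `verify_signature` matters because `read_to_end` leaves the cursor at EOF: a caller that hands the same buffer straight to the extractor without rewinding would extract zero bytes. A std-only sketch of the pitfall (no tauri or minisign code involved):

```rust
use std::io::{Cursor, Read, Seek, SeekFrom};

fn main() -> std::io::Result<()> {
    let mut archive = Cursor::new(b"signed archive".to_vec());

    // First consumer (signature verification in the diff) drains the reader.
    let mut data = Vec::new();
    archive.read_to_end(&mut data)?;
    assert_eq!(data.len(), 14);

    // A naive second read now sees nothing...
    let mut leftover = Vec::new();
    archive.read_to_end(&mut leftover)?;
    assert!(leftover.is_empty());

    // ...which is why `Extract::files`/`extract_into` seek back to the start
    // before every pass.
    archive.seek(SeekFrom::Start(0))?;
    let mut again = Vec::new();
    archive.read_to_end(&mut again)?;
    assert_eq!(again, data);
    println!("ok");
    Ok(())
}
```

In the updater flow this is harmless because the `Extract` methods rewind defensively, but any new caller of `verify_signature` needs to remember the rule.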

View File

@@ -38,6 +38,9 @@ pub enum Error {
/// Error building updater.
#[error("Unable to prepare the updater: {0}")]
Builder(String),
/// Error extracting the new version.
#[error("Unable to extract the new version: {0}")]
Extract(String),
/// Updater is not supported for current operating system or platform.
#[error("Unsupported operating system or platform")]
UnsupportedPlatform,

View File

@@ -0,0 +1 @@
dW50cnVzdGVkIGNvbW1lbnQ6IHNpZ25hdHVyZSBmcm9tIHRhdXJpIHNlY3JldCBrZXkKUldSWTRKaFRZQmJER2NJZ21zb1RObVMwZmdkQUZ2OFFxM2dYdVhLRXpSNFp1VW03Zml2THlDRldpSmJPRkYyQlQ4QzltdkYxTG5MbzdWODZwdFN4aUNoWXY1V0lMem55T3djPQp0cnVzdGVkIGNvbW1lbnQ6IHRpbWVzdGFtcDoxNjI5OTE2MjIwCWZpbGU6YXJjaGl2ZS5saW51eC50YXIuZ3oKZXExb3FCVGlxVkc3YThBVklQeVdhS0psOWROT3p4aVcwaE1ZaE9SRWVFU2lBdnRTVFd1SnJ4QjlTWW90d045UXFZZ1VROE1mUFJhUWxaUjVSaXAwREE9PQo=

View File

@@ -1 +0,0 @@
dW90cnVzdGVkIGIvbW1lbnQ6IHNpZ25hdHVyZSBmcm1tIHRhdXJpIHNlY3JldCBrZXkKUldUTE3QzWxkQolZOVVDaC92ZnhXN0IrVm4rVW9GKzdoSFF6NEtFc3J3c004YUhQTFR0Njg5MGtuZkZqeVh1cTlwZ1dmWG9aSkx5d0t1WTBkS04wK1RBeEI2K2pka2tsT3drPQp0cnVzdGVkIGNvbW1lbnQ6IHRpbWVzdGFtcDoxNTkxODg2NjU2CWZpbGU6L1VzZXJzL2RhdmlkL2Rldi90YXVyaV91cGRhdGVyL3RhdXJpLXVwZGF0ZXIvdGVzdC9maXh0dXJlL2FyY2hpdmVzL2FyY2hpdmUudGFyLmd6CjNmMC9XdmYyzDtCM3ZoaWhEbHVVL08vV2tLejQ0Wlg5SkNGTys2N1ZMTHQvUENrK0svMlgzNE22UkQ0OG1sZ0RqTGZXNzA0OGxocmg4ODljM3BGOEJnPT0K