Transfers
Read and write whole files, use bounded concurrent read or write requests, observe transfer progress, and understand the current transfer limits.
Reading Files
Use `readFile(_:chunkSize:maxConcurrentReads:progress:)` to fetch an entire file into memory:

```swift
let data = try await sftp.readFile("/etc/hostname")
let text = String(decoding: data, as: UTF8.self)
print(text)
```

The default chunk size is `32 * 1024` bytes. You can adjust it:

```swift
let data = try await sftp.readFile("/var/log/system.log", chunkSize: 8 * 1024)
```

Read behavior:
- Traversio opens the remote file handle for you
- reads sequentially until EOF by default
- closes the handle before returning
If you want a small amount of read-ahead on one handle, increase `maxConcurrentReads`:

```swift
let data = try await sftp.readFile(
    "/var/log/system.log",
    chunkSize: 32 * 1024,
    maxConcurrentReads: 4
)
```

That keeps at most four `SSH_FXP_READ` requests in flight on one handle and still reassembles the final file contents in order.
Use `resumeDownloadFile(_:existingData:chunkSize:maxConcurrentReads:progress:)` when you already have a trusted local prefix of the remote file and want Traversio to fetch only the remaining suffix:
```swift
let partial = Array("hello ".utf8)
let result = try await sftp.resumeDownloadFile(
    "/var/log/traversio.log",
    existingData: partial,
    chunkSize: 32 * 1024,
    maxConcurrentReads: 4
)
print(result.startingOffset)
print(result.bytesDownloaded)
print(String(decoding: result.data, as: UTF8.self))
```

Current resume behavior:
- Traversio first runs `STAT` on the remote path
- the helper compares the reported remote size with `existingData.count`
- if the remote file is larger, Traversio opens the file and starts reading at that offset
- if the local prefix already covers the full remote size, the helper returns immediately without opening a file handle
- if the remote size is smaller than the local prefix, the helper throws `SSHSFTPResumeError.remoteFileIsSmallerThanLocalData`
- if the server returns attributes without a file size, the helper throws `SSHSFTPResumeError.remoteFileSizeUnavailable`
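Those error cases can drive a simple fallback: when the local prefix turns out to be stale, retry with a plain whole-file read. A minimal sketch, assuming the `sftp` session from the example above and that `result.data` carries the combined file contents:

```swift
let localPrefix = Array("hello ".utf8)
let data: [UInt8]
do {
    let result = try await sftp.resumeDownloadFile(
        "/var/log/traversio.log",
        existingData: localPrefix,
        chunkSize: 32 * 1024,
        maxConcurrentReads: 4
    )
    data = result.data
} catch SSHSFTPResumeError.remoteFileIsSmallerThanLocalData {
    // The remote file shrank, so the local prefix is stale;
    // fall back to a fresh whole-file read.
    data = try await sftp.readFile("/var/log/traversio.log")
}
```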
Writing Files
Use `writeFile(_:data:chunkSize:maxConcurrentWrites:syncAfterWrite:progress:)` for whole-file uploads:

```swift
let payload = Array("hello from traversio\n".utf8)
try await sftp.writeFile("/tmp/traversio-demo.txt", data: payload)
```

Default write behavior:
- sequential chunked writes unless you raise `maxConcurrentWrites`
- `chunkSize` defaults to `32 * 1024`
- the convenience path opens the file with `.write`, `.create`, and `.truncate`

`writeFile` replaces existing contents unless you switch to a lower-level open-file workflow.
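When you do need to preserve existing contents, the handle API from Working With File Handles is one way to append. This is a sketch, not a committed pattern: it assumes the handle's `stat()` attributes expose an optional file `size`, which this document does not show.

```swift
let handle = try await sftp.openFile(
    "/tmp/traversio-demo.txt",
    flags: [.write, .create]   // no .truncate, so existing bytes survive
)
// Assumed attribute: an optional `size` on the stat result.
let end = try await handle.stat().size ?? 0
try await handle.write(Array("appended line\n".utf8), at: end)
try await handle.close()
```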
Use `resumeUploadFile(_:data:chunkSize:maxConcurrentWrites:syncAfterWrite:progress:)` when the remote path may already contain a prefix of the payload and you want Traversio to continue from the server-reported file size:
```swift
let payload = Array("hello from traversio\n".utf8)
let result = try await sftp.resumeUploadFile(
    "/tmp/traversio-demo.txt",
    data: payload
)
print(result.startingOffset)
print(result.bytesUploaded)
print(result.didResume)
```

Current resume behavior:
- Traversio first runs `STAT` on the target path
- `SSH_FX_NO_SUCH_FILE` starts the upload from offset `0`
- an existing remote file resumes from its reported size
- Traversio opens the file with `.write` and `.create`, without `.truncate`
- if the remote size is larger than the local payload, the helper throws `SSHSFTPResumeError.remoteFileIsLargerThanLocalData`
- if the server returns attributes without a file size, the helper throws `SSHSFTPResumeError.remoteFileSizeUnavailable`
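A caller can treat `remoteFileIsLargerThanLocalData` as a signal that the remote copy has diverged and simply overwrite it. A minimal sketch, assuming the `sftp` session from the examples above:

```swift
let payload = Array("hello from traversio\n".utf8)
do {
    _ = try await sftp.resumeUploadFile("/tmp/traversio-demo.txt", data: payload)
} catch SSHSFTPResumeError.remoteFileIsLargerThanLocalData {
    // The remote file no longer matches a prefix of our payload;
    // replace it with a full overwrite instead of resuming.
    try await sftp.writeFile("/tmp/traversio-demo.txt", data: payload)
}
```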
If you want a bounded amount of write pipelining on one handle, increase `maxConcurrentWrites`:

```swift
try await sftp.writeFile(
    "/tmp/traversio-demo.txt",
    data: payload,
    chunkSize: 32 * 1024,
    maxConcurrentWrites: 4
)
```

That keeps at most four `SSH_FXP_WRITE` requests in flight at a time while still waiting for every status reply before the upload finishes.
If the server advertises OpenSSH `[email protected]` version 1, you can also ask Traversio to issue an explicit post-write durability request before the handle closes:

```swift
try await sftp.writeFile(
    "/tmp/traversio-demo.txt",
    data: payload,
    syncAfterWrite: true
)
```

If the extension is not advertised, Traversio fails the call instead of pretending the data was synced.
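Because the call fails rather than silently skipping the sync, callers choose their own fallback. The sketch below catches broadly only for illustration; this document does not name the specific error thrown when the extension is missing, and production code should match it precisely rather than retrying on every failure:

```swift
do {
    try await sftp.writeFile("/tmp/traversio-demo.txt", data: payload, syncAfterWrite: true)
} catch {
    // The server may not advertise [email protected]; retry the
    // upload without the explicit durability request.
    try await sftp.writeFile("/tmp/traversio-demo.txt", data: payload)
}
```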
Working With File Handles
Use `openFile(_:flags:attributes:)` when you need offset-based reads or writes, handle-scoped metadata, or an explicit fsync step under your own control:
```swift
let handle = try await sftp.openFile(
    "/tmp/traversio-demo.txt",
    flags: [.read, .write, .create]
)
let prefix = try await handle.read(at: 0, length: 64)
print(prefix.map { String(decoding: $0, as: UTF8.self) } as Any)
try await handle.write(Array("tail\n".utf8), at: 5)
try await handle.synchronize()
let attributes = try await handle.stat()
print(attributes.permissions as Any)
try await handle.close()
```

The public handle surface exposes:
- `read(at:length:)`
- `readAll(chunkSize:maxConcurrentReads:progress:)`
- `readChunks(startingAt:chunkSize:)`
- `write(_:at:)`
- `write(contentsOf:startingAt:progress:)`
- `stat()`
- `setAttributes(_:)`
- `fileSystemAttributes()`
- `synchronize()`
- `close()`
`readFile(...)` and `writeFile(...)` remain the whole-file convenience wrappers. They are the best default when you do not need explicit handle control.
If you already have a handle and want the same bounded whole-file read helper there, use `readAll(...)`:
```swift
let handle = try await sftp.openFile("/var/log/system.log")
let data = try await handle.readAll(
    chunkSize: 32 * 1024,
    maxConcurrentReads: 4
)
try await handle.close()
```

Handle-Level Streaming
Use `readChunks(startingAt:chunkSize:)` when you want a caller-controlled chunk stream instead of collecting the whole file in memory:
```swift
let handle = try await sftp.openFile("/var/log/system.log")
for try await chunk in handle.readChunks(chunkSize: 8 * 1024) {
    print(chunk.offset)
    print(chunk.bytes.count)
}
try await handle.close()
```

Each yielded `SSHSFTPFileChunk` carries:
- the remote `offset` used for that read request
- the `bytes` returned for that chunk
- convenience accessors like `count` and `endOffset`
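Those accessors make it easy to remember how far a stream got, for example so a later call could continue with `startingAt:`. A sketch, assuming `endOffset` uses the same integer type as the read offsets:

```swift
let handle = try await sftp.openFile("/var/log/system.log")
var collected: [UInt8] = []
var nextOffset: UInt64 = 0
for try await chunk in handle.readChunks(chunkSize: 8 * 1024) {
    collected.append(contentsOf: chunk.bytes)
    nextOffset = chunk.endOffset   // where a later readChunks(startingAt:) could resume
}
try await handle.close()
```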
Use `write(contentsOf:startingAt:progress:)` when your upload source is already an `AsyncSequence` of byte chunks:
```swift
let handle = try await sftp.openFile(
    "/tmp/traversio-stream.txt",
    flags: [.write, .create, .truncate]
)
let stream = AsyncStream<[UInt8]> { continuation in
    continuation.yield(Array("hello ".utf8))
    continuation.yield(Array("world\n".utf8))
    continuation.finish()
}
try await handle.write(contentsOf: stream)
try await handle.close()
```

Streaming write behavior:
- chunks are written sequentially in the order your `AsyncSequence` yields them
- `startingAt:` lets you resume from a caller-chosen offset
- progress is cumulative for the current streaming write call and leaves `totalBytes` empty
- handle-level streaming keeps ownership of open/close with the caller
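Combining `startingAt:` with a handle opened without `.truncate` gives a resumable streamed upload. A sketch, again assuming the `stat()` attributes expose an optional `size` (not shown in this document):

```swift
let handle = try await sftp.openFile(
    "/tmp/traversio-stream.txt",
    flags: [.write, .create]   // keep existing bytes
)
// Assumed attribute: an optional `size` on the stat result.
let existing = try await handle.stat().size ?? 0
let tail = AsyncStream<[UInt8]> { continuation in
    continuation.yield(Array("more data\n".utf8))
    continuation.finish()
}
// Continue writing after the bytes already on the server.
try await handle.write(contentsOf: tail, startingAt: existing)
try await handle.close()
```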
Progress Callbacks
The whole-file convenience APIs can now report cumulative transfer progress through `SSHSFTPTransferProgress`.
Example:
```swift
let payload = Array(repeating: UInt8(ascii: "a"), count: 128 * 1024)
try await sftp.writeFile(
    "/tmp/traversio-demo.txt",
    data: payload,
    chunkSize: 32 * 1024,
    maxConcurrentWrites: 4,
    progress: { progress in
        print(progress.bytesTransferred)
        print(progress.totalBytes as Any)
        print(progress.fractionCompleted as Any)
    }
)
```

Current progress behavior:
- `readFile(...)` and `SFTPFileHandle.readAll(...)` report cumulative `bytesTransferred`
- `resumeDownloadFile(...)` reports cumulative read progress against the full remote length, starting from the already-present local prefix when a resume actually happens
- `writeFile(...)` reports cumulative `bytesTransferred` plus `totalBytes`
- `resumeUploadFile(...)` reports cumulative write progress against the full payload length, starting from the already-present remote prefix when a resume actually happens
- `write(contentsOf:startingAt:progress:)` reports cumulative `bytesTransferred` for the streamed portion and leaves `totalBytes` empty
- the callback runs on the transfer task, so expensive work inside the callback also becomes part of the transfer's pacing
- progress is reported after Traversio has successfully appended one read chunk or received one write status reply
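Because the callback runs on the transfer task, it pays to keep it cheap. One option is a small throttle that only surfaces an update every fixed number of bytes; `ProgressThrottle` is a hypothetical helper, not part of Traversio:

```swift
// Hypothetical helper: forward at most one progress update per `step` bytes,
// so the callback stays cheap on the transfer task.
final class ProgressThrottle {
    let step: Int
    private var lastReported = 0

    init(step: Int) { self.step = step }

    func shouldReport(bytesTransferred: Int) -> Bool {
        guard bytesTransferred - lastReported >= step else { return false }
        lastReported = bytesTransferred
        return true
    }
}

let throttle = ProgressThrottle(step: 64 * 1024)
// Simulate cumulative progress updates arriving every 32 KiB.
for bytes in stride(from: 32 * 1024, through: 256 * 1024, by: 32 * 1024) {
    if throttle.shouldReport(bytesTransferred: bytes) {
        print(bytes)   // prints every second update: 65536, 131072, ...
    }
}
```

Inside a `progress:` closure you would call `throttle.shouldReport(bytesTransferred: progress.bytesTransferred)` and only do work when it returns `true`.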
A Small Round Trip
```swift
let file = "/tmp/traversio-demo.txt"
try await sftp.writeFile(file, data: Array("hello\n".utf8))
let roundTrip = try await sftp.readFile(file)
print(String(decoding: roundTrip, as: UTF8.self))
```

Transfer Limits
This is an early but already useful client surface. Important limits:
- whole-file reads and uploads support a bounded number of concurrent SFTP requests on one handle
- handle-level streaming download and upload APIs are now part of the current release
- resumable whole-file upload and download are now part of the current release
- handle-scoped reads and writes use request/response operations with explicit offsets
- `syncAfterWrite` depends on OpenSSH `[email protected]`
- transfers observe task cancellation, and the broader graceful-cancellation contract is still being refined
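Since transfers observe task cancellation, a caller can wrap one in a `Task` and cancel it. A sketch, assuming the `sftp` session from earlier examples; the exact error surfaced on cancellation is part of the contract still being refined:

```swift
let download = Task {
    try await sftp.readFile("/var/log/system.log")
}
// Some time later, abandon the transfer.
download.cancel()
let data = try? await download.value   // nil when the transfer did not complete
```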