Commit messages
* This allows us to implement "mog ls -l" much more efficiently
* Otherwise it's outright painful to figure out what failed in a
  pipeline...
* This is useful for the "verbose" listing of keys since
  we make a lot of file_info calls here. This API feels
  very awkward, but I think it's unavoidable...
* This is a follow-up to commit
  55de4a3375793fa31993a1e9b4be777007bd31b8
  (url_unescape: fix ordering of "+" => " " of swap)
* Otherwise we'll be converting "%2B" into " " instead of
  "+" when it appears in a file name.
* We may use this for pipelining
* This reverts commit f0ed5cb6ec6851a367175a93398e813e2d62667f.
  Unnecessary paranoia; our Backend is more robust nowadays,
  requiring CRLF before parsing
* It makes sense for use with MogileFS::Pool (in case somebody
  is pooling connections).
* This restores 2.x behavior and is faster and safer since
  we're smart enough to deal with failover. Also dropping
  default @zone support since it doesn't seem that useful.
* I shouldn't be allowed to code.
* Trying to find a happy medium within Hoe while keeping
  my preference for gmake and forcing wrongdoc on readers:
  JavaScript and GUIs all suck :P
* Each backend could have a different story to tell
* We don't want to silently truncate data on our users;
  that would be bad.
* The return value of IO#wait is strange and confusing, and
  relying on FIONREAD is a waste of time anyway.
* Should make tests easier to fix and debug.
* We don't want to blindly retry on invalid keys and such,
  only on invalid (truncated) responses, timeouts, and
  syscall errors.
* Found with 1.9.3
* Read-only commands to the MogileFS tracker may be safely
  retried if a request is sent but no response is received.
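That policy could be sketched as follows; the method name, command list, and exception set here are assumptions for illustration, not the gem's actual code:

```ruby
require 'timeout'

# Hypothetical sketch: only idempotent (read-only) tracker commands are
# retried, and only on transport-level failures, never on application
# errors such as "unknown_key".
IDEMPOTENT = %w(get_paths file_info list_keys noop).freeze  # assumed list

def with_retries(cmd, tries = 3)
  begin
    yield
  rescue Errno::ECONNRESET, Errno::EPIPE, Timeout::Error
    tries -= 1
    raise unless IDEMPOTENT.include?(cmd) && tries > 0
    retry  # re-send the request; safe because the command is read-only
  end
end
```

A write command like "delete" would fall through and raise on the first transport error instead of retrying.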
* It should raise on an unknown key, of course.
* I don't use any of its tools
* Just like IO.copy_stream
* get_paths may take a Hash for its optional arguments and
  now supports the optional :pathcount argument.
  There may now be a default @zone for MogileFS::MogileFS
  objects as well (specified via :zone).
* Using Array#map! instead of Array#map can save us at
  least one object allocation.
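A small illustration of the difference (plain Ruby, nothing gem-specific):

```ruby
keys = %w(foo bar baz)
id = keys.object_id

copies = keys.map { |k| k.upcase }  # allocates a brand-new Array
keys.map! { |k| k.upcase }          # rewrites the receiver in place

keys                  # => ["FOO", "BAR", "BAZ"]
keys.object_id == id  # => true: same Array object, one less allocation
```

On a hot path the saved allocation also means one less object for the GC to track.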
* At least my year-old Rubinius installation does not
* Just borrow the code from 1.9.3
* Not sure exactly what was causing it...
* No need to unnecessarily trigger GC or hit EMFILE/ENFILE
  on VMs that rarely GC IO objects...
* GET is all we need and we can save some code this way.
  If we ever need to use HEAD much again, we can use
  net/http/persistent since our internal HTTP classes
  are only optimized for large responses.
* This is only needed for users on old MogileFS servers
* Since we'll have multiple tries, try to limit errors.
* Oh well, not a big savings there anyways, unlike the inner loop.
* We'll now accept an optional argument which can be passed to
  IO.copy_stream directly. This should make life easier on users
  so they won't be exposed to our internals to make efficient
  copies of large files.
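This works because IO.copy_stream itself accepts either open IO objects or plain filenames for both ends, so the destination can be handed straight through. A small sketch of the underlying mechanism (the file names are made up for the example):

```ruby
require 'tmpdir'

# IO.copy_stream takes IO objects *or* filenames for source and
# destination, so callers never have to open or manage the files.
dir = Dir.mktmpdir
src = File.join(dir, "src")
dst = File.join(dir, "dst")
File.write(src, "hello mog")

copied = IO.copy_stream(src, dst)  # filename dst, no IO juggling needed
copied  # => 9 (bytes copied)
```

Under 1.9, IO.copy_stream can also use copy offloading (e.g. sendfile) internally, which is why exposing it beats any hand-rolled read/write loop.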
* Obviously I could go farther, but not at the expense of
  readability. There are C libraries I could use, but MogileFS
  may move to a JSON-based protocol in the future anyways...
* We can use the file_info command to get things faster now.
* setup is expensive with integration tests, since we wait for
  the monitor
* classids can get recycled, it seems.
* Our custom copy_stream needs to flush data like it does
  under 1.9. 1.8 also can't do a blocking open(2) on a FIFO
  in a native thread, so we'll fork() instead.
* This was added in MogileFS 2.45
* This is a command added in MogileFS 2.45
* It's conceivable we'd need it.
* If a user tries to pipe something to us and we can't
  rewind on failure, propagate that error all the way
  up to avoid risking a corrupted upload.
* We won't redefine the "new" singleton method since that
  conflicts with existing usage.
* Ruby 1.9.3 considers them harmful