CASSANALYTICS-147: BufferingInputStream fails to read last chunk#193
lukasz-antoniak wants to merge 3 commits into apache:trunk from
Conversation
```diff
  int bytesToRead = chunkSize * numChunks;
- long skipAhead = size - bytesToRead + 1;
+ long skipAhead = size - bytesToRead;
```
I am not sure how the change in BufferingInputStream affects skip() as used during BIG index reading. All integration tests pass, though, and I think this unit test is just a simulation.
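A minimal sketch (not the PR's actual code; the concrete values are assumptions chosen for illustration) of the off-by-one this diff fixes: the old formula positions the stream one byte past where the final `bytesToRead` bytes begin, so the last chunk comes up one byte short.

```java
// Hypothetical reproduction of the skipAhead off-by-one.
// With size = 10 and bytesToRead = 4, the bytes we want are at
// offsets 6..9, so we must skip exactly 6 bytes, not 7.
public class SkipAheadDemo
{
    public static void main(String[] args)
    {
        long size = 10;
        int chunkSize = 2;
        int numChunks = 2;
        int bytesToRead = chunkSize * numChunks;

        long oldSkipAhead = size - bytesToRead + 1; // skips one byte too many
        long newSkipAhead = size - bytesToRead;     // leaves exactly bytesToRead bytes

        System.out.println(oldSkipAhead);                    // 7
        System.out.println(newSkipAhead);                    // 6
        System.out.println(size - newSkipAhead == bytesToRead); // true
    }
}
```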
```diff
  // Deliver data in chunks until request is fulfilled
- while (position < actualEnd)
+ while (position <= actualEnd) // range boundaries are inclusive
```
According to the JavaDoc below, ranges should be considered inclusive.
```java
/**
 * Asynchronously request bytes for the SSTable file component in the range start-end,
 * and pass on to the StreamConsumer when available.
 * The start-end range is inclusive.
 *
 * @param start the start of the bytes range
 * @param end the end of the bytes range
 * @param consumer the StreamConsumer to return the bytes to when the request is complete
 */
void request(long start, long end, StreamConsumer consumer);
```
It seems like, with this fix, end is now at most
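As a rough illustration of why the loop boundary must be inclusive (variable names here are assumptions, not taken from the patch): a chunking loop over an inclusive `[start, end]` range only delivers every byte when the condition is `position <= end`; with `<`, the byte at `end` is silently dropped.

```java
// Sketch: deliver an inclusive [start, end] range in fixed-size chunks.
public class InclusiveRangeDemo
{
    public static void main(String[] args)
    {
        long start = 0, end = 9;   // inclusive range: 10 bytes total
        int chunkSize = 4;
        long position = start;
        long delivered = 0;
        while (position <= end)    // inclusive boundary, as the JavaDoc requires
        {
            // last byte of this chunk, clamped to the inclusive end
            long chunkEnd = Math.min(position + chunkSize - 1, end);
            delivered += chunkEnd - position + 1;
            position = chunkEnd + 1;
        }
        System.out.println(delivered); // end - start + 1 = 10
    }
}
```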
```diff
+ @Test
+ public void testUnalignedEndReading() throws IOException
```
Minor: we might want to assert on returnedBuffers.size() == 2 to catch regressions where extra or missing requests are issued.
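A hedged sketch of that suggested assertion, assuming `returnedBuffers` is a list collected by the test's mocked source (the name comes from the comment above; the surrounding harness is hypothetical):

```java
import java.util.List;

// Illustrative guard: after the unaligned read, exactly two requests
// should have been issued, so exactly two buffers should come back.
public class RequestCountGuard
{
    static void verifyRequestCount(List<byte[]> returnedBuffers)
    {
        if (returnedBuffers.size() != 2)
        {
            throw new AssertionError("expected 2 requests, got " + returnedBuffers.size());
        }
    }

    public static void main(String[] args)
    {
        // Simulated response: one full chunk plus the short final chunk.
        verifyRequestCount(List.of(new byte[4], new byte[2]));
        System.out.println("ok");
    }
}
```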
Fixes CASSANALYTICS-147.