DVDInterface: Chunking and various timing accuracy improvements #4829
Conversation
Thanks for doing this! I'll review shortly. Are we not using Reviewable anymore?
{
  // The amount of data the buffer contains *right now*, rounded to a DVD ECC block.
  buffer_end = s_read_buffer_start_offset +
               Common::AlignDown((current_time - s_read_buffer_start_time) *
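The idea in the snippet above can be sketched as a small standalone model: the amount of data buffered so far is the elapsed time multiplied by the read rate, rounded down to a whole DVD ECC block (16 sectors of 2048 bytes), since the drive only completes reads in whole ECC blocks. The function and variable names here (`BufferedEndOffset`, `bytes_per_tick`) are illustrative, not Dolphin's actual identifiers.

```cpp
#include <cassert>
#include <cstdint>

// Round value down to the nearest multiple of size.
constexpr uint64_t AlignDown(uint64_t value, uint64_t size)
{
  return value - value % size;
}

// A DVD ECC block is 16 sectors of 2048 bytes = 32 KiB; partially read
// blocks do not count as buffered data yet.
constexpr uint64_t DVD_ECC_BLOCK_SIZE = 16 * 2048;

// Hypothetical model of the calculation above: how far the buffer has
// filled since buffering started, at a constant bytes-per-tick rate.
uint64_t BufferedEndOffset(uint64_t start_offset, uint64_t start_time,
                           uint64_t current_time, uint64_t bytes_per_tick)
{
  const uint64_t bytes_read = (current_time - start_time) * bytes_per_tick;
  return start_offset + AlignDown(bytes_read, DVD_ECC_BLOCK_SIZE);
}
```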
From a code perspective, looks good to me with one minor comment about a constant.
Source/Core/Core/HW/DVDInterface.cpp
Outdated
@@ -1120,7 +1112,8 @@ void ExecuteCommand(u32 command_0, u32 command_1, u32 command_2, u32 output_addr
   // to simulate the speed of a real disc drive
   if (!command_handled_by_thread)
   {
-    CoreTiming::ScheduleEvent(ticks_until_completion, s_finish_executing_command,
+    // This is an arbitrary delay - needs testing to determine the appropriate value
+    CoreTiming::ScheduleEvent(SystemTimers::GetTicksPerSecond() / 15000, s_finish_executing_command,
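For context on the constant in this hunk: dividing the tick rate by 15000 schedules the completion event 1/15000 of a second later, expressed in CPU ticks. A minimal sketch, assuming the GameCube's 486 MHz CPU clock as a fixed stand-in for `SystemTimers::GetTicksPerSecond()`:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for SystemTimers::GetTicksPerSecond(); the GameCube CPU runs at
// 486 MHz, used here as a fixed example value (the Wii clock differs).
constexpr uint64_t TicksPerSecond() { return 486000000; }

// The delay from the diff above: 1/15000 of a second in CPU ticks. As the
// in-code comment notes, the divisor is arbitrary and still needs testing.
constexpr uint64_t CommandDelayTicks() { return TicksPerSecond() / 15000; }
```

At 486 MHz this works out to 32400 ticks per scheduled command completion.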
Bringing two old Reviewable discussions from @JosJuice here:
The drive didn't appear to be smart about optimizing for this. In fact, it was quite dumb in a lot of cases (it would throw away buffers when seeking backward, even if they overlapped). From my experimental data I never saw any evidence that it would wait for a streaming read if it was faster. I think it's safe to assume that this is not the case.
That's correct. I wasn't able to formulate an experiment to determine the seek time separate from the read time, so I just baked that read of one block into this number. We should add a short comment to that effect.
Force-pushed from 675535f to a9ae891
I've added that information to the comment above the definition of CalculateSeekTime.
LGTM!
Pulling the test strategy from the other PR:
Source/Core/Core/State.cpp
Outdated
@@ -71,7 +71,7 @@ static Common::Event g_compressAndDumpStateSyncEvent;
 static std::thread g_save_thread;

 // Don't forget to increase this after doing changes on the savestate system
-static const u32 STATE_VERSION = 73;  // Last changed in PR 4651
+static const u32 STATE_VERSION = 74;  // Last changed in PR 3701
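The version bump in this hunk matters because a savestate serialized under one STATE_VERSION cannot safely be deserialized under another once the layout changes (as the new chunking state does). A simplified, illustrative model of that gate; the struct and function are hypothetical, not Dolphin's actual savestate API:

```cpp
#include <cassert>
#include <cstdint>

// Any change to the serialized layout requires bumping STATE_VERSION,
// which invalidates states written by older builds.
static const uint32_t STATE_VERSION = 74;

// Hypothetical header stored at the front of a savestate file.
struct StateHeader
{
  uint32_t version;
};

// Refuse to load a state whose version does not match, rather than
// misinterpreting its bytes.
bool CanLoadState(const StateHeader& header)
{
  return header.version == STATE_VERSION;
}
```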
Splits DVD reads up into smaller chunks so that data is available before the final interrupt is triggered. This better simulates the DMA that happens on a real device, which some games take advantage of, either by playing back data as it is loading or by using data that an outstanding read is about to overwrite.
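The chunking idea described above can be sketched as follows: rather than copying the whole read to emulated RAM when the final interrupt fires, the transfer is divided into fixed-size pieces that are copied as they "arrive". The chunk size, names, and return type here are illustrative assumptions, not the PR's actual implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// One DVD ECC block (32 KiB) as an example chunk size.
constexpr size_t CHUNK_SIZE = 32 * 1024;

// Split a read of total_length bytes into per-chunk copy lengths; the
// last chunk may be shorter. Each chunk would be DMA-copied to emulated
// RAM before the final completion interrupt is raised.
std::vector<size_t> SplitReadIntoChunks(size_t total_length)
{
  std::vector<size_t> chunk_lengths;
  for (size_t done = 0; done < total_length; done += CHUNK_SIZE)
    chunk_lengths.push_back(std::min(CHUNK_SIZE, total_length - done));
  return chunk_lengths;
}
```

Because the data lands in RAM incrementally, a game that starts consuming the buffer before the read finishes (as Sonic Riders does) sees valid data instead of stale memory.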
Everything looks good to me. I checked the INIs over.
Thanks for taking over and landing this, @JosJuice 🎉 🎉 🎉
This is a rebased and cleaned up version of PR #3701.
This PR has two primary improvements: Copying disc data to the emulated RAM in chunks instead of all in one go at the end (fixes various games, including Sonic Riders) and using more accurate timing models that take chunking and seek timing into account (makes loading times closer to consoles but doesn't fix any bugs as far as I know).
All comments on the old PR should be fixed, except for the ones that I made today. EDIT: Those two comments have been addressed now.