Not meaning to be rude, but this really doesn't look right. You can't just flip the screen upside down and sample from the flipped image; light doesn't work that way. You can see it in your second screenshot, where the non-reflected cobble wall is much taller than the reflected one, when with a real reflection both would appear the same height to the player. The way screen-space reflection is traditionally done is to reconstruct the view-space position of the pixel being rendered (you need the inverse projection matrix for this), reflect the view ray about the surface normal, then step the resulting vec3 forward until it intersects the depth buffer. You then sample the color buffer at that intersection point and use it as the reflected color. Needless to say, that's vastly more complex than a simple flip down the middle, and even this method has issues when the reflected ray lands off-screen, making it impossible to actually sample a value. If you're interested, this looks like a good resource from a quick glance -
http://roar11.com/2015/07/screen-space-glossy-reflections/

And, regardless, none of this data is even available with the current shader system. I think it's a nice idea, but this certainly isn't a great way to do it, and I can't really blame you for that; more engine support would be needed to fully achieve this.
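For concreteness, the raycast-and-march approach I described above looks roughly like this in GLSL. This is just an untested sketch: the uniform and texture names (`depthTex`, `colorTex`, `projMat`, `invProjMat`) are made up for illustration, the step size and count are arbitrary, and a real implementation would use a smarter march (the linked article covers hierarchical tracing).

```glsl
// Hypothetical inputs - none of these are provided by the current shader system.
uniform sampler2D depthTex;    // scene depth buffer
uniform sampler2D colorTex;    // scene color buffer
uniform mat4 projMat;          // projection matrix
uniform mat4 invProjMat;       // inverse projection matrix

// Unproject a screen UV + depth sample back to a view-space position.
vec3 viewPosFromDepth(vec2 uv) {
    float d = texture2D(depthTex, uv).r;
    vec4 clip = vec4(uv * 2.0 - 1.0, d * 2.0 - 1.0, 1.0);
    vec4 view = invProjMat * clip;   // this is why you need the inverse projection
    return view.xyz / view.w;
}

vec4 screenSpaceReflect(vec2 uv, vec3 viewNormal) {
    vec3 origin = viewPosFromDepth(uv);
    // Reflect the view ray about the surface normal.
    vec3 rayDir = reflect(normalize(origin), viewNormal);

    vec3 p = origin;
    for (int i = 0; i < 64; i++) {           // naive fixed-step march
        p += rayDir * 0.1;

        // Project the marched point back into screen space.
        vec4 clip = projMat * vec4(p, 1.0);
        vec2 sampleUV = clip.xy / clip.w * 0.5 + 0.5;

        // The failure case mentioned above: the ray left the screen,
        // so there is simply no data to sample.
        if (sampleUV.x < 0.0 || sampleUV.x > 1.0 ||
            sampleUV.y < 0.0 || sampleUV.y > 1.0)
            return vec4(0.0);

        // Did we march behind the geometry stored in the depth buffer?
        float sceneZ = viewPosFromDepth(sampleUV).z;
        if (p.z < sceneZ)                    // view space looks down -z
            return texture2D(colorTex, sampleUV);  // hit: sample reflected color
    }
    return vec4(0.0);                        // no intersection found
}
```

Even this sketch glosses over thickness testing and binary-search refinement of the hit point, but it should make clear how much more machinery is involved than a vertical flip.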