Zero coverage when using hardhat_reset #574


Closed
cgewecke opened this issue Nov 18, 2020 · 11 comments

@cgewecke
Member

At the moment the tool reaches deep into the hardhat node to attach directly to the vm step event, because hardhat's experimental trace hook seems to be missing pc-to-instruction data in some cases (calls to linked libraries, delegate calls, and/or factory contract deployments).

Per this HH PR comment, is the vm destroyed on reset?

If so, perhaps detect this via HARDHAT_NETWORK_RESET_EVENT and re-attach.

@cgewecke
Member Author

cgewecke commented Mar 1, 2021

This was reported in the wild in an HH Discord comment in mid-February.

@cgewecke cgewecke added the bug label Mar 1, 2021
@cgewecke cgewecke changed the title Check for vm persistence across hardhat_reset Zero coverage when using hardhat_reset Mar 1, 2021
@alcuadrado
Collaborator

Hey Chris, I just noticed this issue. We added an event to our providers that is triggered when a hardhat_reset is run. I think this would help a lot. It's called hardhatNetworkReset, and it's triggered both in the in-memory HH Network provider, and in our HTTP Provider when connected to a remote HH Network instance.

@cgewecke
Member Author

cgewecke commented Mar 2, 2021

@alcuadrado Great! That sounds like exactly what's needed. Thanks, will fix.

@ajb413

ajb413 commented Jun 5, 2021

Hello. Just ran into this issue myself. I am running hardhat_reset in my mocha beforeEach to get my fork back to a specific block for each individual test. My main contract file went down to 0% coverage. Does anyone know of a workaround?

@cgewecke
Member Author

cgewecke commented Jun 5, 2021

@ajb413 Apologies, this really needs to get fixed. You should be able to use evm_snapshot and evm_revert instead of reset.

@ajb413

ajb413 commented Jun 7, 2021

@cgewecke No need to apologize. Thank you so much for your workaround! It works perfectly. To help those that need this...

Old way, preferable, but does not work with solidity-coverage at the moment:

describe('Some Contract', function() {

  beforeEach(async () => {
    await resetForkedChain();
    // code ...
  });

  // tests...

});

// Does not work with solidity-coverage at the moment
async function resetForkedChain() {
  // Parent directory's hardhat.config.js needs these to be set
  const forkUrl = hre.config.networks.hardhat.forking.url;
  const forkBlockNumber = hre.config.networks.hardhat.forking.blockNumber;
  await hre.network.provider.request({
    method: 'hardhat_reset',
    params: [{
      forking: {
        jsonRpcUrl: forkUrl,
        blockNumber: forkBlockNumber
      }
    }]
  });
}

Workaround in the meantime:

let snapshot;

describe('Some Contract', function() {

  before(async () => {
    // code ...
    await makeForkedChainSnapshot();
  });

  beforeEach(async () => {
    await resetForkedChain();
    await makeForkedChainSnapshot();
    // code ...
  });

  // tests...

});

async function resetForkedChain() {
  await hre.network.provider.request({
    method: 'evm_revert',
    params: [ snapshot ] // snapshot is global
  });
}

async function makeForkedChainSnapshot() {
  // snapshot is global
  snapshot = await hre.network.provider.request({ method: 'evm_snapshot' });
}

@0xGorilla

0xGorilla commented Aug 26, 2021

Maybe this will help someone using snapshots (since you cannot revert to the same snapshot twice):

import { network } from 'hardhat';

class SnapshotManager {
  snapshots: { [id: string]: string } = {};

  async take(): Promise<string> {
    const id = await this.takeSnapshot();
    this.snapshots[id] = id;
    return id;
  }

  async revert(id: string): Promise<void> {
    await this.revertSnapshot(this.snapshots[id]);
    this.snapshots[id] = await this.takeSnapshot();
  }

  private async takeSnapshot(): Promise<string> {
    return (await network.provider.request({
      method: 'evm_snapshot',
      params: [],
    })) as string;
  }

  private async revertSnapshot(id: string) {
    await network.provider.request({
      method: 'evm_revert',
      params: [id],
    });
  }
}

// if you want to use it as a singleton
export const snapshot = new SnapshotManager();
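The "cannot revert twice" behavior that motivates re-taking the snapshot in `revert` can be shown with a self-contained sketch (a fake provider that mimics the single-use semantics of `evm_snapshot`/`evm_revert`; `makeFakeProvider` is purely illustrative, not part of Hardhat):

```javascript
// Fake provider mimicking Hardhat's snapshot semantics: evm_revert
// consumes the snapshot id, so a second revert to the same id fails.
function makeFakeProvider() {
  let nextId = 1;
  const live = new Set();
  return {
    async request({ method, params = [] }) {
      if (method === 'evm_snapshot') {
        const id = '0x' + (nextId++).toString(16);
        live.add(id);
        return id;
      }
      if (method === 'evm_revert') {
        const [id] = params;
        if (!live.has(id)) return false; // already consumed
        live.delete(id);
        return true;
      }
      throw new Error(`unknown method ${method}`);
    },
  };
}

module.exports = { makeFakeProvider };
```

This is why the SnapshotManager above immediately replaces the stored id after each revert: the old id is gone the moment it is used.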

@adjisb
Contributor

adjisb commented Sep 28, 2021

Any advice on how to take a snapshot of the initial state of the EVM?
The problem I see is that by the time the tests run there is already a lot of state... for example, what if a previous test eats all my balance?
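One pattern for this (a sketch, not from the thread): capture the baseline exactly once, before any test mutates state (e.g. in a root-level mocha `before` hook), and always revert to and then re-take that baseline. The `makeBaseline` helper below is hypothetical:

```javascript
// Capture the chain state once, before any test mutates it, and always
// restore to that baseline. `provider` is anything with a request() method
// speaking the evm_snapshot / evm_revert RPC (e.g. hre.network.provider).
function makeBaseline(provider) {
  let id = null;
  return {
    // Idempotent: only the first call actually takes a snapshot.
    async capture() {
      if (id === null) {
        id = await provider.request({ method: 'evm_snapshot' });
      }
    },
    async restore() {
      await provider.request({ method: 'evm_revert', params: [id] });
      // Snapshot ids are single-use, so re-take immediately.
      id = await provider.request({ method: 'evm_snapshot' });
    },
  };
}

module.exports = { makeBaseline };
```

Calling `capture()` in a root-level `before` and `restore()` in each suite's `beforeEach` keeps every test starting from the same pristine state, regardless of what earlier tests did.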

@adjisb
Contributor

adjisb commented Sep 29, 2021

This fixes the issue on my side: #667

@cgewecke
Member Author

Update: there's ongoing work to support hardhat_reset at #681. Unfortunately it's proving difficult to re-attach a VM step event listener to the newly instantiated hardhat node created after the reset event.

If anyone is interested in helping debug this issue further please feel free to experiment with #681 and open a PR against it. It has a simple failing unit test case.

@cgewecke
Member Author

This should be resolved in 0.7.18.
