check volumes for state when retrieving pool for configDrive creation #2709
Conversation
```java
        dataStore = _dataStoreMgr.getDataStore(volumes.get(0).getPoolId(), DataStoreRole.Primary);
    }
    final List<VolumeVO> volumes = _volumeDao.findByInstanceAndType(profile.getVirtualMachine().getId(), Volume.Type.ROOT);
    if (volumes != null && volumes.size() > 0) {
```
You can (should?) use CollectionUtils here though.
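For reference, a minimal sketch of the suggested idiom; the wrapper class and method name are illustrative, while `CollectionUtils` (Apache Commons Collections) is the same helper used later in this PR:

```java
import java.util.List;

import org.apache.commons.collections.CollectionUtils;

class VolumeGuardExample {
    // Equivalent to: volumes != null && volumes.size() > 0
    static boolean hasVolumes(List<?> volumes) {
        return CollectionUtils.isNotEmpty(volumes);
    }
}
```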
```java
            case UploadError:
            case UploadAbandoned:
            default:
                break;
```
All above cases can be removed?
Yes, but these are here for clarity.
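To make the trade-off concrete, a condensed sketch; the enum is a stand-in for the real state type, and only the case labels come from the diff above:

```java
class SwitchClarityExample {
    // Stand-in for the actual state enum switched on above.
    enum State { UploadError, UploadAbandoned, Ready }

    static void handle(State state) {
        switch (state) {
            // Behaviorally identical to a bare `default: break;`, but the
            // explicit labels document which states are deliberately ignored.
            case UploadError:
            case UploadAbandoned:
            default:
                break;
        }
    }
}
```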
Attempting to deploy a VM with a config drive on primary storage generates the following error:

```
Asking ConfigDrive to release NicProfile[8-4-acef4b56-a86d-4519-affd-068869458240-null-null
2018-06-19 11:50:06,007 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-1:ctx-5b7ddba6 job-41/job-42 ctx-ef262bd0) (logid:1f612ec8) Deployment found - P0=VM[User|i-2-4-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(6|ROOT-->Pool(2))]
```
```java
        }
        final List<VolumeVO> volumes = _volumeDao.findByInstanceAndType(profile.getVirtualMachine().getId(), Volume.Type.ROOT);
        if (CollectionUtils.isNotEmpty(volumes)) {
            dataStore = pickDataStoreFromVolumes(volumes);
```
We may need this code. When a VM is freshly deployed/started, the dest can contain storage disks which are not allocated but exist only in the destination plan; therefore we scan those planned disks to find the root disk and then find the datastore for it. Note that at this stage those planned disks may not have been persisted in the db.
If an existing (say, stopped) VM is started, the destination plan may not have them; then we simply query the volumes for the VM (which should be available from the db).
I think the NPE is hit in the case of a freshly started VM because the planned volume may not have been allocated and persisted in the db. IMO, this PR is not blocking the RC, and the refactoring PR can be targeted for 4.11.2.0.
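A rough sketch of the two-phase lookup described above, under the assumption that `DeployDestination.getStorageForDisks()` exposes the planned disk-to-pool map; the method name and exact signatures are mine, not the PR's:

```java
// Sketch only: _dataStoreMgr and _volumeDao as in the element class under review.
private DataStore pickRootVolumeDataStore(VirtualMachineProfile profile, DeployDestination dest) {
    // Freshly deployed/started VM: the ROOT disk may exist only in the
    // destination plan and not yet in the db, so scan the planned disks first.
    if (dest != null && dest.getStorageForDisks() != null) {
        for (Map.Entry<Volume, StoragePool> entry : dest.getStorageForDisks().entrySet()) {
            if (entry.getKey().getVolumeType() == Volume.Type.ROOT) {
                return _dataStoreMgr.getDataStore(entry.getValue().getId(), DataStoreRole.Primary);
            }
        }
    }
    // Existing (e.g. stopped) VM: no planned disks, so fall back to the
    // volumes already persisted in the db.
    final List<VolumeVO> volumes = _volumeDao.findByInstanceAndType(profile.getVirtualMachine().getId(), Volume.Type.ROOT);
    return CollectionUtils.isNotEmpty(volumes)
            ? _dataStoreMgr.getDataStore(volumes.get(0).getPoolId(), DataStoreRole.Primary)
            : null;
}
```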
I agree that this is not blocking, but it needs to be fixed, as it does not take into account that volumes may have a state of DESTROYED or EXPUNGING. I'll be looking at a fix today.
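A minimal sketch of what such a fix could look like inside `pickDataStoreFromVolumes`; the state names are from the CloudStack `Volume.State` enum, but treat the exact filtering as an assumption rather than this PR's final code:

```java
private DataStore pickDataStoreFromVolumes(List<VolumeVO> volumes) {
    for (final VolumeVO volume : volumes) {
        // A ROOT volume being torn down has no usable pool, so skip
        // Destroy/Expunging volumes instead of blindly taking get(0).
        if (volume.getState() == Volume.State.Destroy || volume.getState() == Volume.State.Expunging) {
            continue;
        }
        return _dataStoreMgr.getDataStore(volume.getPoolId(), DataStoreRole.Primary);
    }
    return null;
}
```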
Okay @DaanHoogland, but if the root volume is destroyed or expunging, then the VM is probably destroyed/expunged as well (probably a corner case).
@rhtyd I agree. I am really wondering if this is a red herring or a needed fix.
@DaanHoogland I've tested and reviewed the PR, please see my notes in the diff section.

Whether this works or not, we could do with an integration test for it.
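Not the integration test meant here, but a hedged JUnit/Mockito sketch of the behavior worth pinning down; it assumes the helper is made package-visible for testing and that `ConfigDriveNetworkElement` is the class under review:

```java
import static org.junit.Assert.assertNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Collections;

import org.junit.Test;

public class ConfigDrivePoolSelectionTest {
    @Test
    public void expungingRootVolumeYieldsNoDataStore() {
        // Hypothetical: an expunging ROOT volume should be skipped, so no
        // data store is picked (rather than an NPE or a dead pool).
        final VolumeVO expunging = mock(VolumeVO.class);
        when(expunging.getState()).thenReturn(Volume.State.Expunging);

        final ConfigDriveNetworkElement element = new ConfigDriveNetworkElement();
        assertNull(element.pickDataStoreFromVolumes(Collections.singletonList(expunging)));
    }
}
```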
Force-pushed from b05c16d to 0e04e69
```java
        return dataStore;
    }

    private DataStore savelyPickExistingRootVolumeDataStore(VirtualMachineProfile profile, DataStore dataStore) {
```
typo? 'safely'
@borisstoyanov can you advise if it's passing manual config drive testing, wrt primary+secondary storage for hosting config drives? /cc @DaanHoogland - can you address any outstanding issues?

@blueorangutan package

@DaanHoogland a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.

Packaging result: ✔centos6 ✔centos7 ✔debian. JID-2174

@blueorangutan test

@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

Trillian test result (tid-2848)
borisstoyanov left a comment:
LGTM, based on test results and manual verification
check volumes for state when retrieving pool for configDrive creation (apache#2709)

* only ask for the root volume, removing extensive query
* better name
Description
When querying for a storage pool to create a config drive on, only pools for those volumes that have the right state should be considered.
Testing