Question

Given a matrix where 1 is the current subset

test =

     0     0     0     0     0     0
     0     0     0     0     0     0
     0     0     1     1     0     0
     0     0     1     1     0     0
     0     0     0     0     0     0
     0     0     0     0     0     0

Is there a function, or a quick method, to change the subset to the boundary of the current subset?

E.g., get this subset from 'test' above:

test =

     0     0     0     0     0     0
     0     1     1     1     1     0
     0     1     0     0     1     0
     0     1     0     0     1     0
     0     1     1     1     1     0
     0     0     0     0     0     0

In the end I just want to get the minimum of the cells surrounding a subset of a matrix. Sure, I could loop through and take the minimum of the boundary cell by cell, but there must be a way to do it with the approach I've shown above.

Note the subset WILL be connected, but may not be rectangular. This may be the big catch.

This is a possible subset... (I would pad this with a NaN border)

test =

     0     0     0     0     0     0
     0     0     0     0     0     0
     0     0     1     1     0     0
     0     0     1     1     0     0
     0     0     1     1     1     1
     0     0     1     1     1     1

Ideas?


Solution

The basic steps I'd use are:

  1. Perform a dilation on the shape to get a new area which is the shape plus its boundary.
  2. Subtract the original shape from the dilated shape to leave just the boundary.
  3. Use the boundary to index your data matrix, then take the minimum.

Dilation

What I want to do here is pass a 3x3 window over each cell and take the maximum value in that window:

[m, n] = size(A); % assuming A is your original shape matrix
APadded = zeros(m + 2, n + 2);
APadded(2:end-1, 2:end-1) = A; % pad A with zeros on each side
ADilated = zeros(m + 2, n + 2); % this will hold the dilated shape

for i = 1:m
    for j = 1:n
        mask = false(size(APadded)); % mask must be logical for indexing
        mask(i:i+2, j:j+2) = true; % a 3x3 square of true around (i+1, j+1)
        ADilated(i + 1, j + 1) = max(APadded(mask));
    end
end
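If the explicit loop is too slow on large matrices, the same dilation can be written in one line with conv2(), which is core MATLAB; this is a sketch assuming A is the logical shape matrix:

```matlab
% Convolving with a 3x3 box of ones counts the shape cells in each
% cell's neighbourhood; any nonzero count means the cell belongs to
% the dilated shape. 'same' keeps the original size, with implicit
% zero padding at the edges, so the manual padding step is not needed.
ADilated = conv2(double(A), ones(3), 'same') > 0;
```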

Shape subtraction

This is basically a logical AND and a logical NOT to remove the intersection:

ABoundary = ADilated & (~APadded);

At this stage you may want to remove the border we added for the dilation, since we don't need it any more:

ABoundary = ABoundary(2:end-1, 2:end-1);

Find the minimum data point along the boundary

We can use our logical boundary to index the original data into a vector, then just take the minimum of that vector.

dataMinimum = min(data(ABoundary));
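Putting the three steps together, a minimal end-to-end sketch (the shape and the `magic(6)` data matrix here are made up for illustration):

```matlab
A = false(6); A(3:4, 3:4) = true;  % the example shape from the question
data = magic(6);                   % hypothetical data matrix, same size

[m, n] = size(A);
APadded = false(m + 2, n + 2);
APadded(2:end-1, 2:end-1) = A;     % pad so the 3x3 window always fits
ADilated = false(m + 2, n + 2);
for i = 1:m
    for j = 1:n
        ADilated(i+1, j+1) = any(any(APadded(i:i+2, j:j+2)));
    end
end
ABoundary = ADilated & ~APadded;   % boundary = dilation minus shape
ABoundary = ABoundary(2:end-1, 2:end-1);
dataMinimum = min(data(ABoundary));
```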

Other tips

You should look at this as a morphology problem, not set theory. It can be solved pretty easily with imdilate() (requires the image package). You basically only need to subtract the image from its dilation with a 3x3 matrix of ones.

octave> test = logical ([0  0  0  0  0  0
                         0  0  0  0  0  0
                         0  0  1  1  0  0
                         0  0  1  1  0  0
                         0  0  1  1  1  1
                         0  0  1  1  1  1]);
octave> imdilate (test, true (3)) - test
ans =

   0   0   0   0   0   0
   0   1   1   1   1   0
   0   1   0   0   1   0
   0   1   0   0   1   1
   0   1   0   0   0   0
   0   1   0   0   0   0

It does not, however, pad with NaN. If you really want that, you could pad your original matrix with false, do the operation, and then check whether there are any true values in the border.
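A sketch of that pad-then-check approach (assuming the image package is loaded and the shape is in `test`):

```matlab
padded = false(size(test) + 2);
padded(2:end-1, 2:end-1) = test;        % pad with false on all sides
boundary = imdilate(padded, true(3)) & ~padded;
% any true value in the added border means the shape touched the edge
touchesEdge = any(boundary(1,:)) || any(boundary(end,:)) || ...
              any(boundary(:,1)) || any(boundary(:,end));
boundary = boundary(2:end-1, 2:end-1);  % trim back to the original size
```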

Note that you don't have to use logical(), in which case you'd use ones() instead of true(), but that takes more memory and has worse performance.

EDIT: since you are trying to do it without using any MATLAB toolbox, take a look at the source of imdilate() in Octave. For the case of logical matrices (which is your case) it's a simple use of filter2(), which belongs to core MATLAB. That said, the following one-liner should work fine and be much faster:

octave> (filter2 (true (3), test) > 0) - test
ans =

   0   0   0   0   0   0
   0   1   1   1   1   0
   0   1   0   0   1   0
   0   1   0   0   1   1
   0   1   0   0   0   0
   0   1   0   0   0   0

One possible solution is to take the subset and add it to the original matrix, offsetting its position each time by one row up or down combined with one column left or right. The result is then expanded by one row and column all around the original subset. You then use the original matrix to mask the original subset to zero.

Like this:

test_new = test + ...
[[test(2:end,2:end);zeros(1,size(test,2)-1)],zeros(size(test,1),1)] + ... %move subset up-left
[[zeros(1,size(test,2)-1);test(1:end-1,2:end)],zeros(size(test,1),1)] + ... %move subset down-left
[zeros(size(test,1),1),[test(2:end,1:end-1);zeros(1,size(test,2)-1)]] + ... %move subset up-right
[zeros(size(test,1),1),[zeros(1,size(test,2)-1);test(1:end-1,1:end-1)]];  %move subset down-right

test_masked = test_new.*~test; %mask with original matrix
result = test_masked;
result(result>1)=1; % ensure that there is only 1's, not 2, 3, etc.

The result for this on your test matrix is:

result =

 0     0     0     0     0     0
 0     1     1     1     1     0
 0     1     0     0     1     0
 0     1     0     0     1     1
 0     1     0     0     0     0
 0     1     0     0     0     0

Edited - it now grabs the corners as well, by moving the subset up and to the left, up and to the right, down and to the left, and down and to the right.

I expect this would be a very quick way to achieve this - it doesn't have any loops or function calls, just matrix operations.
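The four shifted copies above can also be written with circshift(); note that circshift() wraps around at the edges, so this sketch assumes the subset does not touch the matrix border:

```matlab
% OR the four diagonally shifted copies with the original, then
% mask out the original subset to leave just the boundary.
shifted = circshift(test, [-1 -1]) | circshift(test, [-1 1]) | ...
          circshift(test, [ 1 -1]) | circshift(test, [ 1 1]) | test;
result = shifted & ~test;
```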

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow