Here's my attempt at this, using row_number() with a partition. I've broken it into steps to hopefully make it easy to follow. If your table already has a column of sequential integer identifiers, you can omit the first CTE. Even then, you might be able to simplify this further, but it does appear to work.
(Edited to add a flag indicating jobs with multiple ranges as requested in a comment.)
declare @sampleData table (JobNumber int, TimeOfWeigh datetime);
insert into @sampleData values
(100, '01/01/2014 08:00'),
(100, '01/01/2014 09:00'),
(100, '01/01/2014 10:00'),
(200, '01/01/2014 12:00'),
(200, '01/01/2014 13:00'),
(300, '01/01/2014 15:00'),
(300, '01/01/2014 16:00'),
(100, '02/01/2014 08:00'),
(100, '02/01/2014 09:00'),
(100, '03/01/2014 10:00');
-- The first CTE assigns an ordering to the records according to TimeOfWeigh,
-- producing the row numbers you gave in your example.
with JobsCTE as
(
    select
        row_number() over (order by TimeOfWeigh) as RowNumber,
        JobNumber,
        TimeOfWeigh
    from @sampleData
),
-- The second CTE orders by the RowNumber we created above, but restarts the
-- ordering every time the JobNumber changes. The difference between RowNumber
-- and this new ordering will be constant within each group.
GroupsCTE as
(
    select
        RowNumber - row_number() over (partition by JobNumber order by RowNumber) as GroupNumber,
        JobNumber,
        TimeOfWeigh
    from JobsCTE
),
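-- To illustrate with the sample data: JobNumber 100 occupies RowNumbers
-- 1-3 and 8-10, so its partitioned row_number runs 1-6 and GroupNumber
-- works out as:
--
--   RowNumber   row_number()   GroupNumber
--   1           1              0
--   2           2              0
--   3           3              0
--   8           4              4
--   9           5              4
--   10          6              4
--
-- The two distinct GroupNumbers (0 and 4) mark the two contiguous ranges.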
-- The third CTE groups by JobNumber alone (ignoring GroupNumber) to find
-- jobs that span more than one contiguous range; we left join to it below.
DuplicatedJobsCTE as
(
    select JobNumber
    from GroupsCTE
    group by JobNumber
    having count(distinct GroupNumber) > 1
)
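-- For the sample data, only JobNumber 100 has more than one distinct
-- GroupNumber (0 and 4), so this CTE returns a single row: 100.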
-- Finally, we use GroupNumber to get the mins and maxes from contiguous ranges.
select
    G.JobNumber,
    min(G.TimeOfWeigh) as [First Weigh],
    max(G.TimeOfWeigh) as [Last Weigh],
    case when D.JobNumber is null then 0 else 1 end as [Multiple Ranges]
from
    GroupsCTE G
    left join DuplicatedJobsCTE D on G.JobNumber = D.JobNumber
group by
    G.JobNumber,
    G.GroupNumber,
    D.JobNumber
order by
    [First Weigh];
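For the sample data above, this should return one row per contiguous range. (Note that the date literals are interpreted per your DATEFORMAT setting; the output below assumes the us_english default of month/day/year, so '02/01/2014' is February 1.)

    JobNumber   First Weigh        Last Weigh         Multiple Ranges
    100         2014-01-01 08:00   2014-01-01 10:00   1
    200         2014-01-01 12:00   2014-01-01 13:00   0
    300         2014-01-01 15:00   2014-01-01 16:00   0
    100         2014-02-01 08:00   2014-03-01 10:00   1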